00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2003 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3264 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.129 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.129 The recommended git tool is: git 00:00:00.129 using credential 00000000-0000-0000-0000-000000000002 00:00:00.131 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.164 Fetching changes from the remote Git repository 00:00:00.165 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.192 Using shallow fetch with depth 1 00:00:00.192 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.192 > git --version # timeout=10 00:00:00.219 > git --version # 'git version 2.39.2' 00:00:00.219 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.238 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.238 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.425 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.435 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.446 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:07.446 > git config core.sparsecheckout # timeout=10 00:00:07.456 > git read-tree -mu HEAD # timeout=10 00:00:07.470 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:07.486 Commit message: "inventory: add WCP3 to free inventory" 00:00:07.486 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:07.581 [Pipeline] Start of Pipeline 00:00:07.596 [Pipeline] library 00:00:07.598 Loading library shm_lib@master 00:00:07.598 Library shm_lib@master is cached. Copying from home. 00:00:07.616 [Pipeline] node 00:00:07.626 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.627 [Pipeline] { 00:00:07.635 [Pipeline] catchError 00:00:07.636 [Pipeline] { 00:00:07.646 [Pipeline] wrap 00:00:07.655 [Pipeline] { 00:00:07.660 [Pipeline] stage 00:00:07.662 [Pipeline] { (Prologue) 00:00:07.827 [Pipeline] sh 00:00:08.106 + logger -p user.info -t JENKINS-CI 00:00:08.124 [Pipeline] echo 00:00:08.125 Node: GP11 00:00:08.132 [Pipeline] sh 00:00:08.424 [Pipeline] setCustomBuildProperty 00:00:08.437 [Pipeline] echo 00:00:08.439 Cleanup processes 00:00:08.445 [Pipeline] sh 00:00:08.727 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.727 919230 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.741 [Pipeline] sh 00:00:09.023 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.023 ++ grep -v 'sudo pgrep' 00:00:09.023 ++ awk '{print $1}' 00:00:09.023 + sudo kill -9 00:00:09.024 + true 00:00:09.041 [Pipeline] cleanWs 00:00:09.052 [WS-CLEANUP] Deleting project workspace... 00:00:09.052 [WS-CLEANUP] Deferred wipeout is used... 
00:00:09.058 [WS-CLEANUP] done 00:00:09.063 [Pipeline] setCustomBuildProperty 00:00:09.078 [Pipeline] sh 00:00:09.361 + sudo git config --global --replace-all safe.directory '*' 00:00:09.465 [Pipeline] httpRequest 00:00:09.504 [Pipeline] echo 00:00:09.506 Sorcerer 10.211.164.101 is alive 00:00:09.516 [Pipeline] httpRequest 00:00:09.521 HttpMethod: GET 00:00:09.521 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:09.522 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:09.543 Response Code: HTTP/1.1 200 OK 00:00:09.544 Success: Status code 200 is in the accepted range: 200,404 00:00:09.545 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:30.410 [Pipeline] sh 00:00:30.691 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:30.708 [Pipeline] httpRequest 00:00:30.754 [Pipeline] echo 00:00:30.755 Sorcerer 10.211.164.101 is alive 00:00:30.763 [Pipeline] httpRequest 00:00:30.766 HttpMethod: GET 00:00:30.767 URL: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:30.768 Sending request to url: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:30.769 Response Code: HTTP/1.1 200 OK 00:00:30.770 Success: Status code 200 is in the accepted range: 200,404 00:00:30.770 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:47.672 [Pipeline] sh 00:00:47.959 + tar --no-same-owner -xf spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:50.508 [Pipeline] sh 00:00:50.789 + git -C spdk log --oneline -n5 00:00:50.789 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:00:50.789 507e9ba07 nvme: add lock_depth for ctrlr_lock 00:00:50.789 62fda7b5f nvme: check pthread_mutex_destroy() return value 00:00:50.789 e03c164a1 nvme: add nvme_ctrlr_lock 00:00:50.789 d61f89a86 nvme/cuse: Add ctrlr_lock for cuse register and unregister 00:00:50.802 [Pipeline] } 00:00:50.820 [Pipeline] // stage 00:00:50.831 [Pipeline] stage 00:00:50.833 [Pipeline] { (Prepare) 00:00:50.855 [Pipeline] writeFile 00:00:50.875 [Pipeline] sh 00:00:51.154 + logger -p user.info -t JENKINS-CI 00:00:51.166 [Pipeline] sh 00:00:51.445 + logger -p user.info -t JENKINS-CI 00:00:51.457 [Pipeline] sh 00:00:51.735 + cat autorun-spdk.conf 00:00:51.735 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:51.735 SPDK_TEST_NVMF=1 00:00:51.735 SPDK_TEST_NVME_CLI=1 00:00:51.735 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:51.735 SPDK_TEST_NVMF_NICS=e810 00:00:51.735 SPDK_RUN_UBSAN=1 00:00:51.735 NET_TYPE=phy 00:00:51.743 RUN_NIGHTLY=1 00:00:51.748 [Pipeline] readFile 00:00:51.777 [Pipeline] withEnv 00:00:51.779 [Pipeline] { 00:00:51.794 [Pipeline] sh 00:00:52.079 + set -ex 00:00:52.079 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:52.079 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:52.079 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:52.079 ++ SPDK_TEST_NVMF=1 00:00:52.079 ++ SPDK_TEST_NVME_CLI=1 00:00:52.079 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:52.079 ++ SPDK_TEST_NVMF_NICS=e810 00:00:52.079 ++ SPDK_RUN_UBSAN=1 00:00:52.079 ++ NET_TYPE=phy 00:00:52.079 ++ RUN_NIGHTLY=1 00:00:52.079 + case $SPDK_TEST_NVMF_NICS in 00:00:52.079 + DRIVERS=ice 00:00:52.079 + [[ tcp == \r\d\m\a ]] 00:00:52.079 + [[ -n ice ]] 00:00:52.079 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw 
iw_cxgb4 00:00:52.079 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:52.079 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:52.079 rmmod: ERROR: Module irdma is not currently loaded 00:00:52.079 rmmod: ERROR: Module i40iw is not currently loaded 00:00:52.079 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:52.079 + true 00:00:52.079 + for D in $DRIVERS 00:00:52.079 + sudo modprobe ice 00:00:52.079 + exit 0 00:00:52.089 [Pipeline] } 00:00:52.106 [Pipeline] // withEnv 00:00:52.111 [Pipeline] } 00:00:52.127 [Pipeline] // stage 00:00:52.137 [Pipeline] catchError 00:00:52.139 [Pipeline] { 00:00:52.156 [Pipeline] timeout 00:00:52.156 Timeout set to expire in 50 min 00:00:52.158 [Pipeline] { 00:00:52.175 [Pipeline] stage 00:00:52.177 [Pipeline] { (Tests) 00:00:52.194 [Pipeline] sh 00:00:52.477 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:52.477 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:52.477 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:52.477 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:52.477 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:52.477 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:52.477 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:52.477 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:52.477 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:52.477 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:52.477 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:52.477 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:52.477 + source /etc/os-release 00:00:52.477 ++ NAME='Fedora Linux' 00:00:52.477 ++ VERSION='38 (Cloud Edition)' 00:00:52.477 ++ ID=fedora 00:00:52.477 ++ VERSION_ID=38 00:00:52.477 ++ VERSION_CODENAME= 00:00:52.477 ++ PLATFORM_ID=platform:f38 00:00:52.477 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:52.477 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:52.477 ++ LOGO=fedora-logo-icon 00:00:52.477 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:52.477 ++ HOME_URL=https://fedoraproject.org/ 00:00:52.477 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:52.477 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:52.477 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:52.477 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:52.477 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:52.477 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:52.477 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:52.477 ++ SUPPORT_END=2024-05-14 00:00:52.477 ++ VARIANT='Cloud Edition' 00:00:52.477 ++ VARIANT_ID=cloud 00:00:52.477 + uname -a 00:00:52.477 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:52.477 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:53.409 Hugepages 00:00:53.409 node hugesize free / total 00:00:53.409 node0 1048576kB 0 / 0 00:00:53.409 node0 2048kB 0 / 0 00:00:53.409 node1 1048576kB 0 / 0 00:00:53.409 node1 2048kB 0 / 0 00:00:53.409 00:00:53.409 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:53.409 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:00:53.409 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:00:53.409 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:00:53.409 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:00:53.409 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 
00:00:53.409 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:00:53.409 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:00:53.409 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:00:53.409 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:00:53.409 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:00:53.409 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:00:53.409 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:00:53.409 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:00:53.409 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:00:53.409 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:00:53.409 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:00:53.409 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:53.409 + rm -f /tmp/spdk-ld-path 00:00:53.409 + source autorun-spdk.conf 00:00:53.409 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:53.409 ++ SPDK_TEST_NVMF=1 00:00:53.409 ++ SPDK_TEST_NVME_CLI=1 00:00:53.409 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:53.409 ++ SPDK_TEST_NVMF_NICS=e810 00:00:53.409 ++ SPDK_RUN_UBSAN=1 00:00:53.409 ++ NET_TYPE=phy 00:00:53.409 ++ RUN_NIGHTLY=1 00:00:53.409 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:53.409 + [[ -n '' ]] 00:00:53.409 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:53.409 + for M in /var/spdk/build-*-manifest.txt 00:00:53.409 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:53.409 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:53.409 + for M in /var/spdk/build-*-manifest.txt 00:00:53.409 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:53.410 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:53.410 ++ uname 00:00:53.410 + [[ Linux == \L\i\n\u\x ]] 00:00:53.410 + sudo dmesg -T 00:00:53.410 + sudo dmesg --clear 00:00:53.410 + dmesg_pid=919893 00:00:53.410 + [[ Fedora Linux == FreeBSD ]] 00:00:53.410 + sudo dmesg -Tw 00:00:53.410 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:53.410 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:53.410 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:53.410 + [[ -x /usr/src/fio-static/fio ]] 00:00:53.410 + export FIO_BIN=/usr/src/fio-static/fio 00:00:53.410 + FIO_BIN=/usr/src/fio-static/fio 00:00:53.410 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:53.410 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:53.410 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:53.410 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:53.410 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:53.410 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:53.410 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:53.410 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:53.410 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:53.410 Test configuration: 00:00:53.410 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:53.410 SPDK_TEST_NVMF=1 00:00:53.410 SPDK_TEST_NVME_CLI=1 00:00:53.410 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:53.410 SPDK_TEST_NVMF_NICS=e810 00:00:53.410 SPDK_RUN_UBSAN=1 00:00:53.410 NET_TYPE=phy 00:00:53.410 RUN_NIGHTLY=1 05:55:59 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:53.410 05:55:59 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:53.410 05:55:59 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:53.410 05:55:59 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:53.410 05:55:59 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:53.410 05:55:59 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:53.410 05:55:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:53.410 05:55:59 -- paths/export.sh@5 -- $ export PATH 00:00:53.410 05:55:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:53.410 05:55:59 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:53.410 05:55:59 -- common/autobuild_common.sh@435 -- $ date +%s 00:00:53.410 05:55:59 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720842959.XXXXXX 00:00:53.410 05:55:59 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720842959.2Zq3gs 00:00:53.410 05:55:59 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:00:53.410 05:55:59 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 
00:00:53.410 05:55:59 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:53.410 05:55:59 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:53.410 05:55:59 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:53.410 05:55:59 -- common/autobuild_common.sh@451 -- $ get_config_params 00:00:53.410 05:55:59 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:00:53.410 05:55:59 -- common/autotest_common.sh@10 -- $ set +x 00:00:53.410 05:55:59 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:00:53.410 05:55:59 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:53.410 05:55:59 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:53.410 05:55:59 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:53.410 05:55:59 -- spdk/autobuild.sh@16 -- $ date -u 00:00:53.410 Sat Jul 13 03:55:59 AM UTC 2024 00:00:53.410 05:55:59 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:53.410 LTS-59-g4b94202c6 00:00:53.410 05:55:59 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:53.410 05:55:59 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:53.410 05:55:59 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:53.410 05:55:59 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:00:53.410 05:55:59 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:00:53.410 05:55:59 -- common/autotest_common.sh@10 -- $ set +x 00:00:53.410 ************************************ 00:00:53.410 START TEST ubsan 00:00:53.410 ************************************ 00:00:53.410 05:55:59 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:00:53.410 using ubsan 00:00:53.410 00:00:53.410 real 0m0.000s 00:00:53.410 user 0m0.000s 00:00:53.410 sys 0m0.000s 00:00:53.410 05:55:59 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:53.410 05:55:59 -- common/autotest_common.sh@10 -- $ set +x 00:00:53.410 ************************************ 00:00:53.410 END TEST ubsan 00:00:53.410 ************************************ 00:00:53.673 05:55:59 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:53.673 05:55:59 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:53.673 05:55:59 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:53.673 05:55:59 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:53.673 05:55:59 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:53.673 05:55:59 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:53.673 05:55:59 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:53.673 05:55:59 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:53.673 05:55:59 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:00:53.673 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:53.673 Using default DPDK in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:53.946 Using 'verbs' RDMA provider 00:01:04.172 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:01:14.183 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:14.183 Creating mk/config.mk...done. 00:01:14.183 Creating mk/cc.flags.mk...done. 00:01:14.183 Type 'make' to build. 00:01:14.183 05:56:20 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:14.183 05:56:20 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:14.183 05:56:20 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:14.183 05:56:20 -- common/autotest_common.sh@10 -- $ set +x 00:01:14.183 ************************************ 00:01:14.183 START TEST make 00:01:14.183 ************************************ 00:01:14.183 05:56:20 -- common/autotest_common.sh@1104 -- $ make -j48 00:01:14.443 make[1]: Nothing to be done for 'all'. 00:01:22.579 The Meson build system 00:01:22.579 Version: 1.3.1 00:01:22.579 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:22.579 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:22.579 Build type: native build 00:01:22.579 Program cat found: YES (/usr/bin/cat) 00:01:22.579 Project name: DPDK 00:01:22.579 Project version: 23.11.0 00:01:22.579 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:22.579 C linker for the host machine: cc ld.bfd 2.39-16 00:01:22.579 Host machine cpu family: x86_64 00:01:22.579 Host machine cpu: x86_64 00:01:22.579 Message: ## Building in Developer Mode ## 00:01:22.580 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:22.580 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:22.580 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:22.580 Program python3 found: YES (/usr/bin/python3) 00:01:22.580 Program cat found: YES (/usr/bin/cat) 00:01:22.580 Compiler for C supports arguments -march=native: YES 00:01:22.580 Checking for size of "void *" : 8 00:01:22.580 Checking for size of "void *" : 8 (cached) 00:01:22.580 Library m found: YES 00:01:22.580 Library numa found: YES 00:01:22.580 Has header "numaif.h" : YES 00:01:22.580 Library fdt found: NO 00:01:22.580 Library execinfo found: NO 00:01:22.580 Has header "execinfo.h" : YES 00:01:22.580 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:22.580 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:22.580 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:22.580 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:22.580 Run-time dependency openssl found: YES 3.0.9 00:01:22.580 Run-time dependency libpcap found: YES 1.10.4 00:01:22.580 Has header "pcap.h" with dependency libpcap: YES 00:01:22.580 Compiler for C supports arguments -Wcast-qual: YES 00:01:22.580 Compiler for C supports arguments -Wdeprecated: YES 00:01:22.580 Compiler for C supports arguments -Wformat: YES 00:01:22.580 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:22.580 Compiler for C supports arguments -Wformat-security: NO 00:01:22.580 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:22.580 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:22.580 Compiler for C 
supports arguments -Wnested-externs: YES 00:01:22.580 Compiler for C supports arguments -Wold-style-definition: YES 00:01:22.580 Compiler for C supports arguments -Wpointer-arith: YES 00:01:22.580 Compiler for C supports arguments -Wsign-compare: YES 00:01:22.580 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:22.580 Compiler for C supports arguments -Wundef: YES 00:01:22.580 Compiler for C supports arguments -Wwrite-strings: YES 00:01:22.580 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:22.580 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:22.580 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:22.580 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:22.580 Program objdump found: YES (/usr/bin/objdump) 00:01:22.580 Compiler for C supports arguments -mavx512f: YES 00:01:22.580 Checking if "AVX512 checking" compiles: YES 00:01:22.580 Fetching value of define "__SSE4_2__" : 1 00:01:22.580 Fetching value of define "__AES__" : 1 00:01:22.580 Fetching value of define "__AVX__" : 1 00:01:22.580 Fetching value of define "__AVX2__" : (undefined) 00:01:22.580 Fetching value of define "__AVX512BW__" : (undefined) 00:01:22.580 Fetching value of define "__AVX512CD__" : (undefined) 00:01:22.580 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:22.580 Fetching value of define "__AVX512F__" : (undefined) 00:01:22.580 Fetching value of define "__AVX512VL__" : (undefined) 00:01:22.580 Fetching value of define "__PCLMUL__" : 1 00:01:22.580 Fetching value of define "__RDRND__" : 1 00:01:22.580 Fetching value of define "__RDSEED__" : (undefined) 00:01:22.580 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:22.580 Fetching value of define "__znver1__" : (undefined) 00:01:22.580 Fetching value of define "__znver2__" : (undefined) 00:01:22.580 Fetching value of define "__znver3__" : (undefined) 00:01:22.580 Fetching value of define "__znver4__" : (undefined) 00:01:22.580 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:22.580 Message: lib/log: Defining dependency "log" 00:01:22.580 Message: lib/kvargs: Defining dependency "kvargs" 00:01:22.580 Message: lib/telemetry: Defining dependency "telemetry" 00:01:22.580 Checking for function "getentropy" : NO 00:01:22.580 Message: lib/eal: Defining dependency "eal" 00:01:22.580 Message: lib/ring: Defining dependency "ring" 00:01:22.580 Message: lib/rcu: Defining dependency "rcu" 00:01:22.580 Message: lib/mempool: Defining dependency "mempool" 00:01:22.580 Message: lib/mbuf: Defining dependency "mbuf" 00:01:22.580 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:22.580 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:22.580 Compiler for C supports arguments -mpclmul: YES 00:01:22.580 Compiler for C supports arguments -maes: YES 00:01:22.580 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:22.580 Compiler for C supports arguments -mavx512bw: YES 00:01:22.580 Compiler for C supports arguments -mavx512dq: YES 00:01:22.580 Compiler for C supports arguments -mavx512vl: YES 00:01:22.580 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:22.580 Compiler for C supports arguments -mavx2: YES 00:01:22.580 Compiler for C supports arguments -mavx: YES 00:01:22.580 Message: lib/net: Defining dependency "net" 00:01:22.580 Message: lib/meter: Defining dependency "meter" 00:01:22.580 Message: lib/ethdev: Defining dependency "ethdev" 00:01:22.580 Message: lib/pci: Defining dependency 
"pci" 00:01:22.580 Message: lib/cmdline: Defining dependency "cmdline" 00:01:22.580 Message: lib/hash: Defining dependency "hash" 00:01:22.580 Message: lib/timer: Defining dependency "timer" 00:01:22.580 Message: lib/compressdev: Defining dependency "compressdev" 00:01:22.580 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:22.580 Message: lib/dmadev: Defining dependency "dmadev" 00:01:22.580 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:22.580 Message: lib/power: Defining dependency "power" 00:01:22.580 Message: lib/reorder: Defining dependency "reorder" 00:01:22.580 Message: lib/security: Defining dependency "security" 00:01:22.580 Has header "linux/userfaultfd.h" : YES 00:01:22.580 Has header "linux/vduse.h" : YES 00:01:22.580 Message: lib/vhost: Defining dependency "vhost" 00:01:22.580 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:22.580 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:22.580 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:22.580 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:22.580 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:22.580 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:22.580 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:22.580 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:22.580 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:22.580 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:22.580 Program doxygen found: YES (/usr/bin/doxygen) 00:01:22.580 Configuring doxy-api-html.conf using configuration 00:01:22.580 Configuring doxy-api-man.conf using configuration 00:01:22.580 Program mandb found: YES (/usr/bin/mandb) 00:01:22.580 Program sphinx-build found: NO 00:01:22.580 Configuring rte_build_config.h using configuration 00:01:22.580 Message: 00:01:22.580 ================= 00:01:22.580 Applications Enabled 00:01:22.580 ================= 00:01:22.580 00:01:22.580 apps: 00:01:22.580 00:01:22.580 00:01:22.580 Message: 00:01:22.580 ================= 00:01:22.580 Libraries Enabled 00:01:22.580 ================= 00:01:22.580 00:01:22.580 libs: 00:01:22.580 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:22.580 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:22.580 cryptodev, dmadev, power, reorder, security, vhost, 00:01:22.580 00:01:22.580 Message: 00:01:22.580 =============== 00:01:22.580 Drivers Enabled 00:01:22.580 =============== 00:01:22.580 00:01:22.580 common: 00:01:22.580 00:01:22.580 bus: 00:01:22.580 pci, vdev, 00:01:22.580 mempool: 00:01:22.580 ring, 00:01:22.580 dma: 00:01:22.580 00:01:22.580 net: 00:01:22.580 00:01:22.580 crypto: 00:01:22.580 00:01:22.580 compress: 00:01:22.580 00:01:22.580 vdpa: 00:01:22.580 00:01:22.580 00:01:22.580 Message: 00:01:22.580 ================= 00:01:22.580 Content Skipped 00:01:22.580 ================= 00:01:22.580 00:01:22.580 apps: 00:01:22.580 dumpcap: explicitly disabled via build config 00:01:22.580 graph: explicitly disabled via build config 00:01:22.580 pdump: explicitly disabled via build config 00:01:22.580 proc-info: explicitly disabled via build config 00:01:22.580 test-acl: explicitly disabled via build config 00:01:22.580 test-bbdev: explicitly disabled via build config 00:01:22.580 test-cmdline: explicitly disabled via build config 00:01:22.580 test-compress-perf: explicitly 
disabled via build config 00:01:22.580 test-crypto-perf: explicitly disabled via build config 00:01:22.580 test-dma-perf: explicitly disabled via build config 00:01:22.580 test-eventdev: explicitly disabled via build config 00:01:22.580 test-fib: explicitly disabled via build config 00:01:22.580 test-flow-perf: explicitly disabled via build config 00:01:22.580 test-gpudev: explicitly disabled via build config 00:01:22.580 test-mldev: explicitly disabled via build config 00:01:22.580 test-pipeline: explicitly disabled via build config 00:01:22.580 test-pmd: explicitly disabled via build config 00:01:22.580 test-regex: explicitly disabled via build config 00:01:22.580 test-sad: explicitly disabled via build config 00:01:22.580 test-security-perf: explicitly disabled via build config 00:01:22.580 00:01:22.580 libs: 00:01:22.580 metrics: explicitly disabled via build config 00:01:22.580 acl: explicitly disabled via build config 00:01:22.580 bbdev: explicitly disabled via build config 00:01:22.580 bitratestats: explicitly disabled via build config 00:01:22.580 bpf: explicitly disabled via build config 00:01:22.580 cfgfile: explicitly disabled via build config 00:01:22.580 distributor: explicitly disabled via build config 00:01:22.580 efd: explicitly disabled via build config 00:01:22.580 eventdev: explicitly disabled via build config 00:01:22.580 dispatcher: explicitly disabled via build config 00:01:22.580 gpudev: explicitly disabled via build config 00:01:22.580 gro: explicitly disabled via build config 00:01:22.580 gso: explicitly disabled via build config 00:01:22.580 ip_frag: explicitly disabled via build config 00:01:22.580 jobstats: explicitly disabled via build config 00:01:22.580 latencystats: explicitly disabled via build config 00:01:22.580 lpm: explicitly disabled via build config 00:01:22.580 member: explicitly disabled via build config 00:01:22.580 pcapng: explicitly disabled via build config 00:01:22.580 rawdev: explicitly disabled via build config 00:01:22.580 regexdev: explicitly disabled via build config 00:01:22.580 mldev: explicitly disabled via build config 00:01:22.580 rib: explicitly disabled via build config 00:01:22.580 sched: explicitly disabled via build config 00:01:22.580 stack: explicitly disabled via build config 00:01:22.580 ipsec: explicitly disabled via build config 00:01:22.581 pdcp: explicitly disabled via build config 00:01:22.581 fib: explicitly disabled via build config 00:01:22.581 port: explicitly disabled via build config 00:01:22.581 pdump: explicitly disabled via build config 00:01:22.581 table: explicitly disabled via build config 00:01:22.581 pipeline: explicitly disabled via build config 00:01:22.581 graph: explicitly disabled via build config 00:01:22.581 node: explicitly disabled via build config 00:01:22.581 00:01:22.581 drivers: 00:01:22.581 common/cpt: not in enabled drivers build config 00:01:22.581 common/dpaax: not in enabled drivers build config 00:01:22.581 common/iavf: not in enabled drivers build config 00:01:22.581 common/idpf: not in enabled drivers build config 00:01:22.581 common/mvep: not in enabled drivers build config 00:01:22.581 common/octeontx: not in enabled drivers build config 00:01:22.581 bus/auxiliary: not in enabled drivers build config 00:01:22.581 bus/cdx: not in enabled drivers build config 00:01:22.581 bus/dpaa: not in enabled drivers build config 00:01:22.581 bus/fslmc: not in enabled drivers build config 00:01:22.581 bus/ifpga: not in enabled drivers build config 00:01:22.581 bus/platform: not in enabled drivers 
build config 00:01:22.581 bus/vmbus: not in enabled drivers build config 00:01:22.581 common/cnxk: not in enabled drivers build config 00:01:22.581 common/mlx5: not in enabled drivers build config 00:01:22.581 common/nfp: not in enabled drivers build config 00:01:22.581 common/qat: not in enabled drivers build config 00:01:22.581 common/sfc_efx: not in enabled drivers build config 00:01:22.581 mempool/bucket: not in enabled drivers build config 00:01:22.581 mempool/cnxk: not in enabled drivers build config 00:01:22.581 mempool/dpaa: not in enabled drivers build config 00:01:22.581 mempool/dpaa2: not in enabled drivers build config 00:01:22.581 mempool/octeontx: not in enabled drivers build config 00:01:22.581 mempool/stack: not in enabled drivers build config 00:01:22.581 dma/cnxk: not in enabled drivers build config 00:01:22.581 dma/dpaa: not in enabled drivers build config 00:01:22.581 dma/dpaa2: not in enabled drivers build config 00:01:22.581 dma/hisilicon: not in enabled drivers build config 00:01:22.581 dma/idxd: not in enabled drivers build config 00:01:22.581 dma/ioat: not in enabled drivers build config 00:01:22.581 dma/skeleton: not in enabled drivers build config 00:01:22.581 net/af_packet: not in enabled drivers build config 00:01:22.581 net/af_xdp: not in enabled drivers build config 00:01:22.581 net/ark: not in enabled drivers build config 00:01:22.581 net/atlantic: not in enabled drivers build config 00:01:22.581 net/avp: not in enabled drivers build config 00:01:22.581 net/axgbe: not in enabled drivers build config 00:01:22.581 net/bnx2x: not in enabled drivers build config 00:01:22.581 net/bnxt: not in enabled drivers build config 00:01:22.581 net/bonding: not in enabled drivers build config 00:01:22.581 net/cnxk: not in enabled drivers build config 00:01:22.581 net/cpfl: not in enabled drivers build config 00:01:22.581 net/cxgbe: not in enabled drivers build config 00:01:22.581 net/dpaa: not in enabled drivers build config 00:01:22.581 net/dpaa2: not in enabled drivers build config 00:01:22.581 net/e1000: not in enabled drivers build config 00:01:22.581 net/ena: not in enabled drivers build config 00:01:22.581 net/enetc: not in enabled drivers build config 00:01:22.581 net/enetfec: not in enabled drivers build config 00:01:22.581 net/enic: not in enabled drivers build config 00:01:22.581 net/failsafe: not in enabled drivers build config 00:01:22.581 net/fm10k: not in enabled drivers build config 00:01:22.581 net/gve: not in enabled drivers build config 00:01:22.581 net/hinic: not in enabled drivers build config 00:01:22.581 net/hns3: not in enabled drivers build config 00:01:22.581 net/i40e: not in enabled drivers build config 00:01:22.581 net/iavf: not in enabled drivers build config 00:01:22.581 net/ice: not in enabled drivers build config 00:01:22.581 net/idpf: not in enabled drivers build config 00:01:22.581 net/igc: not in enabled drivers build config 00:01:22.581 net/ionic: not in enabled drivers build config 00:01:22.581 net/ipn3ke: not in enabled drivers build config 00:01:22.581 net/ixgbe: not in enabled drivers build config 00:01:22.581 net/mana: not in enabled drivers build config 00:01:22.581 net/memif: not in enabled drivers build config 00:01:22.581 net/mlx4: not in enabled drivers build config 00:01:22.581 net/mlx5: not in enabled drivers build config 00:01:22.581 net/mvneta: not in enabled drivers build config 00:01:22.581 net/mvpp2: not in enabled drivers build config 00:01:22.581 net/netvsc: not in enabled drivers build config 00:01:22.581 net/nfb: not 
in enabled drivers build config 00:01:22.581 net/nfp: not in enabled drivers build config 00:01:22.581 net/ngbe: not in enabled drivers build config 00:01:22.581 net/null: not in enabled drivers build config 00:01:22.581 net/octeontx: not in enabled drivers build config 00:01:22.581 net/octeon_ep: not in enabled drivers build config 00:01:22.581 net/pcap: not in enabled drivers build config 00:01:22.581 net/pfe: not in enabled drivers build config 00:01:22.581 net/qede: not in enabled drivers build config 00:01:22.581 net/ring: not in enabled drivers build config 00:01:22.581 net/sfc: not in enabled drivers build config 00:01:22.581 net/softnic: not in enabled drivers build config 00:01:22.581 net/tap: not in enabled drivers build config 00:01:22.581 net/thunderx: not in enabled drivers build config 00:01:22.581 net/txgbe: not in enabled drivers build config 00:01:22.581 net/vdev_netvsc: not in enabled drivers build config 00:01:22.581 net/vhost: not in enabled drivers build config 00:01:22.581 net/virtio: not in enabled drivers build config 00:01:22.581 net/vmxnet3: not in enabled drivers build config 00:01:22.581 raw/*: missing internal dependency, "rawdev" 00:01:22.581 crypto/armv8: not in enabled drivers build config 00:01:22.581 crypto/bcmfs: not in enabled drivers build config 00:01:22.581 crypto/caam_jr: not in enabled drivers build config 00:01:22.581 crypto/ccp: not in enabled drivers build config 00:01:22.581 crypto/cnxk: not in enabled drivers build config 00:01:22.581 crypto/dpaa_sec: not in enabled drivers build config 00:01:22.581 crypto/dpaa2_sec: not in enabled drivers build config 00:01:22.581 crypto/ipsec_mb: not in enabled drivers build config 00:01:22.581 crypto/mlx5: not in enabled drivers build config 00:01:22.581 crypto/mvsam: not in enabled drivers build config 00:01:22.581 crypto/nitrox: not in enabled drivers build config 00:01:22.581 crypto/null: not in enabled drivers build config 00:01:22.581 crypto/octeontx: not in enabled drivers build config 00:01:22.581 crypto/openssl: not in enabled drivers build config 00:01:22.581 crypto/scheduler: not in enabled drivers build config 00:01:22.581 crypto/uadk: not in enabled drivers build config 00:01:22.581 crypto/virtio: not in enabled drivers build config 00:01:22.581 compress/isal: not in enabled drivers build config 00:01:22.581 compress/mlx5: not in enabled drivers build config 00:01:22.581 compress/octeontx: not in enabled drivers build config 00:01:22.581 compress/zlib: not in enabled drivers build config 00:01:22.581 regex/*: missing internal dependency, "regexdev" 00:01:22.581 ml/*: missing internal dependency, "mldev" 00:01:22.581 vdpa/ifc: not in enabled drivers build config 00:01:22.581 vdpa/mlx5: not in enabled drivers build config 00:01:22.581 vdpa/nfp: not in enabled drivers build config 00:01:22.581 vdpa/sfc: not in enabled drivers build config 00:01:22.581 event/*: missing internal dependency, "eventdev" 00:01:22.581 baseband/*: missing internal dependency, "bbdev" 00:01:22.581 gpu/*: missing internal dependency, "gpudev" 00:01:22.581 00:01:22.581 00:01:22.840 Build targets in project: 85 00:01:22.840 00:01:22.840 DPDK 23.11.0 00:01:22.840 00:01:22.840 User defined options 00:01:22.840 buildtype : debug 00:01:22.840 default_library : shared 00:01:22.840 libdir : lib 00:01:22.840 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:22.840 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:01:22.840 c_link_args : 00:01:22.840 
cpu_instruction_set: native 00:01:22.840 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:22.840 disable_libs : bbdev,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:22.840 enable_docs : false 00:01:22.840 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:22.840 enable_kmods : false 00:01:22.840 tests : false 00:01:22.840 00:01:22.840 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:23.429 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:23.429 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:23.429 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:23.429 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:23.429 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:23.429 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:23.429 [6/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:23.429 [7/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:23.429 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:23.429 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:23.429 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:23.429 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:23.429 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:23.429 [13/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:23.429 [14/265] Linking static target lib/librte_kvargs.a 00:01:23.429 [15/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:23.429 [16/265] Linking static target lib/librte_log.a 00:01:23.429 [17/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:23.429 [18/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:23.429 [19/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:23.429 [20/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:23.695 [21/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:23.957 [22/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.220 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:24.220 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:24.220 [25/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:24.220 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:24.220 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:24.220 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:24.220 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:24.220 [30/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:24.220 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:24.220 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:24.220 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:24.220 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:24.220 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:24.220 [36/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:24.220 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:24.220 [38/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:24.220 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:24.220 [40/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:24.220 [41/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:24.220 [42/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:24.220 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:24.220 [44/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:24.221 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:24.221 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:24.221 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:24.221 [48/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:24.221 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:24.221 [50/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:24.221 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:24.221 [52/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:24.221 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:24.221 [54/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:24.221 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:24.221 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:24.221 [57/265] Linking static target lib/librte_telemetry.a 00:01:24.221 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:24.221 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:24.479 [60/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:24.479 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:24.479 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:24.479 [63/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:24.479 [64/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:24.479 [65/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:24.479 [66/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:24.479 [67/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:24.479 [68/265] Linking static target lib/librte_pci.a 00:01:24.479 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:24.479 [70/265] Generating lib/log.sym_chk with a custom 
command (wrapped by meson to capture output) 00:01:24.742 [71/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:24.742 [72/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:24.742 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:24.742 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:24.742 [75/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:24.742 [76/265] Linking target lib/librte_log.so.24.0 00:01:24.742 [77/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:24.742 [78/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:24.742 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:24.742 [80/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:24.742 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:24.742 [82/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:25.001 [83/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:25.001 [84/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:25.001 [85/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:25.001 [86/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:25.001 [87/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:25.001 [88/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:25.001 [89/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:25.001 [90/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:25.001 [91/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:25.262 [92/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:25.262 [93/265] Linking target lib/librte_kvargs.so.24.0 00:01:25.262 [94/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:25.262 [95/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:25.262 [96/265] Linking static target lib/librte_ring.a 00:01:25.262 [97/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:25.262 [98/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:25.262 [99/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.262 [100/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:25.262 [101/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:25.262 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:25.262 [103/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:25.262 [104/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:25.262 [105/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:25.262 [106/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:25.262 [107/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.262 [108/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:25.262 [109/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:25.262 [110/265] Linking static target lib/librte_eal.a 00:01:25.262 [111/265] Compiling C object 
lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:25.262 [112/265] Linking static target lib/librte_meter.a 00:01:25.262 [113/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:25.262 [114/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:25.525 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:25.525 [116/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:25.525 [117/265] Linking target lib/librte_telemetry.so.24.0 00:01:25.525 [118/265] Linking static target lib/librte_rcu.a 00:01:25.525 [119/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:25.525 [120/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:25.525 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:25.525 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:25.525 [123/265] Linking static target lib/librte_mempool.a 00:01:25.525 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:25.525 [125/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:25.525 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:25.525 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:25.525 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:25.525 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:25.525 [130/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:25.525 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:25.788 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:25.788 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:25.788 [134/265] Linking static target lib/librte_cmdline.a 00:01:25.788 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:25.788 [136/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:25.788 [137/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.788 [138/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:25.788 [139/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:25.788 [140/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:25.788 [141/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:25.788 [142/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:25.788 [143/265] Linking static target lib/librte_net.a 00:01:26.088 [144/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:26.088 [145/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:26.088 [146/265] Linking static target lib/librte_timer.a 00:01:26.088 [147/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.088 [148/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:26.088 [149/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:26.088 [150/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.088 [151/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:26.088 
[152/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:26.088 [153/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:26.349 [154/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:26.349 [155/265] Linking static target lib/librte_dmadev.a 00:01:26.349 [156/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:26.349 [157/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:26.349 [158/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.349 [159/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:26.349 [160/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:26.349 [161/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.349 [162/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:26.349 [163/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:26.349 [164/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:26.349 [165/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.607 [166/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:26.607 [167/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:26.607 [168/265] Linking static target lib/librte_hash.a 00:01:26.607 [169/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:26.607 [170/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:26.607 [171/265] Linking static target lib/librte_compressdev.a 00:01:26.607 [172/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:26.607 [173/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:26.607 [174/265] Linking static target lib/librte_power.a 00:01:26.607 [175/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:26.607 [176/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:26.607 [177/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:26.607 [178/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:26.607 [179/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:26.607 [180/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.607 [181/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:26.607 [182/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:26.607 [183/265] Linking static target lib/librte_reorder.a 00:01:26.607 [184/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.607 [185/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:26.607 [186/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:26.607 [187/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:26.607 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:26.865 [189/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:26.865 [190/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:26.865 [191/265] Compiling C object 
drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:26.865 [192/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:26.865 [193/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:26.865 [194/265] Linking static target lib/librte_mbuf.a 00:01:26.865 [195/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:26.865 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:26.865 [197/265] Linking static target lib/librte_security.a 00:01:26.865 [198/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:26.865 [199/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:26.865 [200/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:26.865 [201/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.865 [202/265] Linking static target drivers/librte_bus_vdev.a 00:01:26.865 [203/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.123 [204/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.123 [205/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:27.123 [206/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:27.123 [207/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:27.123 [208/265] Linking static target drivers/librte_bus_pci.a 00:01:27.123 [209/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:27.123 [210/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:27.123 [211/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:27.123 [212/265] Linking static target drivers/librte_mempool_ring.a 00:01:27.123 [213/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.123 [214/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:27.123 [215/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.123 [216/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:27.123 [217/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.123 [218/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:27.381 [219/265] Linking static target lib/librte_ethdev.a 00:01:27.381 [220/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.381 [221/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:27.381 [222/265] Linking static target lib/librte_cryptodev.a 00:01:27.381 [223/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.316 [224/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.691 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:31.067 [226/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.325 [227/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.584 [228/265] Linking target 
lib/librte_eal.so.24.0 00:01:31.584 [229/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:31.584 [230/265] Linking target lib/librte_ring.so.24.0 00:01:31.584 [231/265] Linking target lib/librte_timer.so.24.0 00:01:31.584 [232/265] Linking target lib/librte_meter.so.24.0 00:01:31.584 [233/265] Linking target lib/librte_pci.so.24.0 00:01:31.584 [234/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:31.584 [235/265] Linking target lib/librte_dmadev.so.24.0 00:01:31.843 [236/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:31.843 [237/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:31.843 [238/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:31.843 [239/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:31.843 [240/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:31.843 [241/265] Linking target lib/librte_rcu.so.24.0 00:01:31.843 [242/265] Linking target lib/librte_mempool.so.24.0 00:01:31.843 [243/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:31.843 [244/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:31.843 [245/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:31.843 [246/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:31.843 [247/265] Linking target lib/librte_mbuf.so.24.0 00:01:32.101 [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:32.101 [249/265] Linking target lib/librte_reorder.so.24.0 00:01:32.101 [250/265] Linking target lib/librte_net.so.24.0 00:01:32.101 [251/265] Linking target lib/librte_compressdev.so.24.0 00:01:32.101 [252/265] Linking target lib/librte_cryptodev.so.24.0 00:01:32.359 [253/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:32.359 [254/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:32.359 [255/265] Linking target lib/librte_hash.so.24.0 00:01:32.359 [256/265] Linking target lib/librte_security.so.24.0 00:01:32.359 [257/265] Linking target lib/librte_cmdline.so.24.0 00:01:32.359 [258/265] Linking target lib/librte_ethdev.so.24.0 00:01:32.359 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:32.359 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:32.359 [261/265] Linking target lib/librte_power.so.24.0 00:01:34.883 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:34.883 [263/265] Linking static target lib/librte_vhost.a 00:01:35.815 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.072 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:36.072 INFO: autodetecting backend as ninja 00:01:36.072 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:37.004 CC lib/ut_mock/mock.o 00:01:37.004 CC lib/ut/ut.o 00:01:37.005 CC lib/log/log.o 00:01:37.005 CC lib/log/log_flags.o 00:01:37.005 CC lib/log/log_deprecated.o 00:01:37.005 LIB libspdk_ut_mock.a 00:01:37.005 SO libspdk_ut_mock.so.5.0 00:01:37.005 LIB libspdk_ut.a 00:01:37.005 LIB libspdk_log.a 00:01:37.005 SO libspdk_ut.so.1.0 00:01:37.005 SO libspdk_log.so.6.1 00:01:37.005 
SYMLINK libspdk_ut_mock.so 00:01:37.005 SYMLINK libspdk_ut.so 00:01:37.005 SYMLINK libspdk_log.so 00:01:37.261 CC lib/dma/dma.o 00:01:37.261 CC lib/ioat/ioat.o 00:01:37.261 CXX lib/trace_parser/trace.o 00:01:37.261 CC lib/util/base64.o 00:01:37.261 CC lib/util/bit_array.o 00:01:37.261 CC lib/util/cpuset.o 00:01:37.261 CC lib/util/crc16.o 00:01:37.261 CC lib/util/crc32.o 00:01:37.261 CC lib/util/crc32c.o 00:01:37.261 CC lib/util/crc32_ieee.o 00:01:37.261 CC lib/util/crc64.o 00:01:37.261 CC lib/util/dif.o 00:01:37.261 CC lib/util/fd.o 00:01:37.261 CC lib/util/file.o 00:01:37.261 CC lib/util/hexlify.o 00:01:37.261 CC lib/util/iov.o 00:01:37.261 CC lib/util/math.o 00:01:37.261 CC lib/util/pipe.o 00:01:37.262 CC lib/util/strerror_tls.o 00:01:37.262 CC lib/util/string.o 00:01:37.262 CC lib/util/uuid.o 00:01:37.262 CC lib/util/fd_group.o 00:01:37.262 CC lib/util/xor.o 00:01:37.262 CC lib/util/zipf.o 00:01:37.262 CC lib/vfio_user/host/vfio_user_pci.o 00:01:37.262 CC lib/vfio_user/host/vfio_user.o 00:01:37.262 LIB libspdk_dma.a 00:01:37.262 SO libspdk_dma.so.3.0 00:01:37.519 SYMLINK libspdk_dma.so 00:01:37.519 LIB libspdk_ioat.a 00:01:37.519 SO libspdk_ioat.so.6.0 00:01:37.519 LIB libspdk_vfio_user.a 00:01:37.519 SO libspdk_vfio_user.so.4.0 00:01:37.519 SYMLINK libspdk_ioat.so 00:01:37.519 SYMLINK libspdk_vfio_user.so 00:01:37.777 LIB libspdk_util.a 00:01:37.777 SO libspdk_util.so.8.0 00:01:37.777 SYMLINK libspdk_util.so 00:01:38.035 CC lib/conf/conf.o 00:01:38.035 CC lib/idxd/idxd.o 00:01:38.035 CC lib/env_dpdk/env.o 00:01:38.035 CC lib/rdma/common.o 00:01:38.035 CC lib/vmd/vmd.o 00:01:38.035 CC lib/json/json_parse.o 00:01:38.035 CC lib/idxd/idxd_user.o 00:01:38.035 CC lib/rdma/rdma_verbs.o 00:01:38.035 CC lib/json/json_util.o 00:01:38.035 CC lib/env_dpdk/memory.o 00:01:38.035 CC lib/idxd/idxd_kernel.o 00:01:38.035 CC lib/vmd/led.o 00:01:38.035 CC lib/json/json_write.o 00:01:38.035 CC lib/env_dpdk/pci.o 00:01:38.035 CC lib/env_dpdk/init.o 00:01:38.035 CC lib/env_dpdk/threads.o 00:01:38.035 CC lib/env_dpdk/pci_ioat.o 00:01:38.035 CC lib/env_dpdk/pci_virtio.o 00:01:38.035 CC lib/env_dpdk/pci_vmd.o 00:01:38.035 CC lib/env_dpdk/pci_idxd.o 00:01:38.035 CC lib/env_dpdk/pci_event.o 00:01:38.035 CC lib/env_dpdk/sigbus_handler.o 00:01:38.035 CC lib/env_dpdk/pci_dpdk.o 00:01:38.035 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:38.035 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:38.035 LIB libspdk_trace_parser.a 00:01:38.035 SO libspdk_trace_parser.so.4.0 00:01:38.292 SYMLINK libspdk_trace_parser.so 00:01:38.292 LIB libspdk_conf.a 00:01:38.292 SO libspdk_conf.so.5.0 00:01:38.292 LIB libspdk_json.a 00:01:38.292 SYMLINK libspdk_conf.so 00:01:38.292 SO libspdk_json.so.5.1 00:01:38.292 SYMLINK libspdk_json.so 00:01:38.292 LIB libspdk_rdma.a 00:01:38.549 SO libspdk_rdma.so.5.0 00:01:38.549 SYMLINK libspdk_rdma.so 00:01:38.549 CC lib/jsonrpc/jsonrpc_server.o 00:01:38.549 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:38.549 CC lib/jsonrpc/jsonrpc_client.o 00:01:38.549 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:38.549 LIB libspdk_idxd.a 00:01:38.549 SO libspdk_idxd.so.11.0 00:01:38.549 SYMLINK libspdk_idxd.so 00:01:38.809 LIB libspdk_vmd.a 00:01:38.809 SO libspdk_vmd.so.5.0 00:01:38.809 LIB libspdk_jsonrpc.a 00:01:38.809 SO libspdk_jsonrpc.so.5.1 00:01:38.809 SYMLINK libspdk_vmd.so 00:01:38.810 SYMLINK libspdk_jsonrpc.so 00:01:39.068 CC lib/rpc/rpc.o 00:01:39.068 LIB libspdk_rpc.a 00:01:39.068 SO libspdk_rpc.so.5.0 00:01:39.326 SYMLINK libspdk_rpc.so 00:01:39.326 CC lib/notify/notify.o 00:01:39.326 CC lib/trace/trace.o 00:01:39.326 CC 
lib/notify/notify_rpc.o 00:01:39.326 CC lib/trace/trace_flags.o 00:01:39.326 CC lib/sock/sock.o 00:01:39.326 CC lib/trace/trace_rpc.o 00:01:39.326 CC lib/sock/sock_rpc.o 00:01:39.584 LIB libspdk_notify.a 00:01:39.584 SO libspdk_notify.so.5.0 00:01:39.584 LIB libspdk_trace.a 00:01:39.584 SYMLINK libspdk_notify.so 00:01:39.584 SO libspdk_trace.so.9.0 00:01:39.584 SYMLINK libspdk_trace.so 00:01:39.584 LIB libspdk_sock.a 00:01:39.843 SO libspdk_sock.so.8.0 00:01:39.843 CC lib/thread/thread.o 00:01:39.843 CC lib/thread/iobuf.o 00:01:39.843 SYMLINK libspdk_sock.so 00:01:39.843 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:39.843 CC lib/nvme/nvme_ctrlr.o 00:01:39.843 CC lib/nvme/nvme_fabric.o 00:01:39.843 CC lib/nvme/nvme_ns_cmd.o 00:01:39.843 CC lib/nvme/nvme_ns.o 00:01:39.843 CC lib/nvme/nvme_pcie_common.o 00:01:39.843 CC lib/nvme/nvme_pcie.o 00:01:39.843 CC lib/nvme/nvme_qpair.o 00:01:39.843 CC lib/nvme/nvme.o 00:01:39.843 CC lib/nvme/nvme_quirks.o 00:01:39.843 CC lib/nvme/nvme_transport.o 00:01:39.843 CC lib/nvme/nvme_discovery.o 00:01:39.843 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:39.843 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:39.843 CC lib/nvme/nvme_tcp.o 00:01:39.843 CC lib/nvme/nvme_opal.o 00:01:39.843 CC lib/nvme/nvme_io_msg.o 00:01:39.843 CC lib/nvme/nvme_poll_group.o 00:01:39.843 CC lib/nvme/nvme_zns.o 00:01:39.843 CC lib/nvme/nvme_vfio_user.o 00:01:39.843 CC lib/nvme/nvme_cuse.o 00:01:39.843 CC lib/nvme/nvme_rdma.o 00:01:40.101 LIB libspdk_env_dpdk.a 00:01:40.101 SO libspdk_env_dpdk.so.13.0 00:01:40.101 SYMLINK libspdk_env_dpdk.so 00:01:41.477 LIB libspdk_thread.a 00:01:41.477 SO libspdk_thread.so.9.0 00:01:41.477 SYMLINK libspdk_thread.so 00:01:41.477 CC lib/accel/accel.o 00:01:41.477 CC lib/virtio/virtio.o 00:01:41.477 CC lib/blob/blobstore.o 00:01:41.477 CC lib/virtio/virtio_vhost_user.o 00:01:41.477 CC lib/accel/accel_rpc.o 00:01:41.477 CC lib/init/json_config.o 00:01:41.477 CC lib/blob/request.o 00:01:41.477 CC lib/accel/accel_sw.o 00:01:41.477 CC lib/virtio/virtio_vfio_user.o 00:01:41.477 CC lib/blob/zeroes.o 00:01:41.477 CC lib/init/subsystem.o 00:01:41.477 CC lib/blob/blob_bs_dev.o 00:01:41.477 CC lib/virtio/virtio_pci.o 00:01:41.477 CC lib/init/subsystem_rpc.o 00:01:41.477 CC lib/init/rpc.o 00:01:41.736 LIB libspdk_init.a 00:01:41.736 SO libspdk_init.so.4.0 00:01:41.994 SYMLINK libspdk_init.so 00:01:41.994 LIB libspdk_virtio.a 00:01:41.994 SO libspdk_virtio.so.6.0 00:01:41.994 SYMLINK libspdk_virtio.so 00:01:41.994 CC lib/event/app.o 00:01:41.994 CC lib/event/reactor.o 00:01:41.994 CC lib/event/log_rpc.o 00:01:41.994 CC lib/event/app_rpc.o 00:01:41.994 CC lib/event/scheduler_static.o 00:01:42.253 LIB libspdk_nvme.a 00:01:42.253 SO libspdk_nvme.so.12.0 00:01:42.253 LIB libspdk_event.a 00:01:42.512 SO libspdk_event.so.12.0 00:01:42.512 SYMLINK libspdk_event.so 00:01:42.512 SYMLINK libspdk_nvme.so 00:01:42.512 LIB libspdk_accel.a 00:01:42.512 SO libspdk_accel.so.14.0 00:01:42.770 SYMLINK libspdk_accel.so 00:01:42.770 CC lib/bdev/bdev.o 00:01:42.770 CC lib/bdev/bdev_rpc.o 00:01:42.770 CC lib/bdev/bdev_zone.o 00:01:42.770 CC lib/bdev/part.o 00:01:42.770 CC lib/bdev/scsi_nvme.o 00:01:44.144 LIB libspdk_blob.a 00:01:44.144 SO libspdk_blob.so.10.1 00:01:44.402 SYMLINK libspdk_blob.so 00:01:44.402 CC lib/lvol/lvol.o 00:01:44.402 CC lib/blobfs/blobfs.o 00:01:44.402 CC lib/blobfs/tree.o 00:01:45.336 LIB libspdk_blobfs.a 00:01:45.336 SO libspdk_blobfs.so.9.0 00:01:45.336 LIB libspdk_lvol.a 00:01:45.336 SYMLINK libspdk_blobfs.so 00:01:45.336 SO libspdk_lvol.so.9.1 00:01:45.336 SYMLINK libspdk_lvol.so 
00:01:45.594 LIB libspdk_bdev.a 00:01:45.594 SO libspdk_bdev.so.14.0 00:01:45.864 SYMLINK libspdk_bdev.so 00:01:45.864 CC lib/ublk/ublk.o 00:01:45.864 CC lib/ftl/ftl_core.o 00:01:45.864 CC lib/ublk/ublk_rpc.o 00:01:45.864 CC lib/nbd/nbd.o 00:01:45.864 CC lib/nvmf/ctrlr.o 00:01:45.864 CC lib/ftl/ftl_init.o 00:01:45.864 CC lib/ftl/ftl_layout.o 00:01:45.864 CC lib/nbd/nbd_rpc.o 00:01:45.864 CC lib/scsi/dev.o 00:01:45.864 CC lib/nvmf/ctrlr_discovery.o 00:01:45.864 CC lib/ftl/ftl_debug.o 00:01:45.864 CC lib/scsi/lun.o 00:01:45.864 CC lib/nvmf/ctrlr_bdev.o 00:01:45.864 CC lib/ftl/ftl_io.o 00:01:45.864 CC lib/nvmf/subsystem.o 00:01:45.864 CC lib/scsi/port.o 00:01:45.864 CC lib/scsi/scsi.o 00:01:45.864 CC lib/nvmf/nvmf.o 00:01:45.864 CC lib/ftl/ftl_sb.o 00:01:45.864 CC lib/scsi/scsi_bdev.o 00:01:45.864 CC lib/ftl/ftl_l2p.o 00:01:45.864 CC lib/scsi/scsi_pr.o 00:01:45.864 CC lib/nvmf/transport.o 00:01:45.864 CC lib/nvmf/nvmf_rpc.o 00:01:45.864 CC lib/scsi/scsi_rpc.o 00:01:45.864 CC lib/nvmf/tcp.o 00:01:45.864 CC lib/ftl/ftl_l2p_flat.o 00:01:45.864 CC lib/scsi/task.o 00:01:45.864 CC lib/nvmf/rdma.o 00:01:45.864 CC lib/ftl/ftl_nv_cache.o 00:01:45.864 CC lib/ftl/ftl_band.o 00:01:45.864 CC lib/ftl/ftl_band_ops.o 00:01:45.864 CC lib/ftl/ftl_writer.o 00:01:45.864 CC lib/ftl/ftl_rq.o 00:01:45.864 CC lib/ftl/ftl_reloc.o 00:01:45.864 CC lib/ftl/ftl_l2p_cache.o 00:01:45.864 CC lib/ftl/ftl_p2l.o 00:01:45.864 CC lib/ftl/mngt/ftl_mngt.o 00:01:45.864 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:45.864 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:45.864 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:45.864 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:45.864 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:45.864 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:45.864 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:45.864 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:45.864 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:45.864 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:46.123 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:46.123 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:46.123 CC lib/ftl/utils/ftl_conf.o 00:01:46.123 CC lib/ftl/utils/ftl_md.o 00:01:46.123 CC lib/ftl/utils/ftl_mempool.o 00:01:46.123 CC lib/ftl/utils/ftl_bitmap.o 00:01:46.381 CC lib/ftl/utils/ftl_property.o 00:01:46.381 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:46.381 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:46.381 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:46.381 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:46.381 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:46.381 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:46.381 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:46.381 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:46.381 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:46.381 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:46.381 CC lib/ftl/base/ftl_base_dev.o 00:01:46.381 CC lib/ftl/base/ftl_base_bdev.o 00:01:46.382 CC lib/ftl/ftl_trace.o 00:01:46.640 LIB libspdk_nbd.a 00:01:46.640 SO libspdk_nbd.so.6.0 00:01:46.640 LIB libspdk_scsi.a 00:01:46.640 SYMLINK libspdk_nbd.so 00:01:46.640 SO libspdk_scsi.so.8.0 00:01:46.898 SYMLINK libspdk_scsi.so 00:01:46.898 LIB libspdk_ublk.a 00:01:46.898 SO libspdk_ublk.so.2.0 00:01:46.898 SYMLINK libspdk_ublk.so 00:01:46.898 CC lib/vhost/vhost.o 00:01:46.898 CC lib/iscsi/conn.o 00:01:46.898 CC lib/vhost/vhost_rpc.o 00:01:46.898 CC lib/iscsi/init_grp.o 00:01:46.898 CC lib/vhost/vhost_scsi.o 00:01:46.898 CC lib/iscsi/iscsi.o 00:01:46.898 CC lib/vhost/vhost_blk.o 00:01:46.898 CC lib/iscsi/md5.o 00:01:46.898 CC lib/vhost/rte_vhost_user.o 00:01:46.898 CC lib/iscsi/param.o 00:01:46.898 CC lib/iscsi/portal_grp.o 00:01:46.898 CC 
lib/iscsi/tgt_node.o 00:01:46.898 CC lib/iscsi/iscsi_subsystem.o 00:01:46.898 CC lib/iscsi/iscsi_rpc.o 00:01:46.898 CC lib/iscsi/task.o 00:01:47.156 LIB libspdk_ftl.a 00:01:47.434 SO libspdk_ftl.so.8.0 00:01:47.698 SYMLINK libspdk_ftl.so 00:01:48.263 LIB libspdk_vhost.a 00:01:48.263 SO libspdk_vhost.so.7.1 00:01:48.263 SYMLINK libspdk_vhost.so 00:01:48.263 LIB libspdk_nvmf.a 00:01:48.263 LIB libspdk_iscsi.a 00:01:48.522 SO libspdk_nvmf.so.17.0 00:01:48.522 SO libspdk_iscsi.so.7.0 00:01:48.522 SYMLINK libspdk_nvmf.so 00:01:48.522 SYMLINK libspdk_iscsi.so 00:01:48.780 CC module/env_dpdk/env_dpdk_rpc.o 00:01:48.780 CC module/sock/posix/posix.o 00:01:48.780 CC module/accel/iaa/accel_iaa.o 00:01:48.780 CC module/accel/ioat/accel_ioat.o 00:01:48.780 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:48.780 CC module/scheduler/gscheduler/gscheduler.o 00:01:48.780 CC module/accel/iaa/accel_iaa_rpc.o 00:01:48.780 CC module/blob/bdev/blob_bdev.o 00:01:48.780 CC module/accel/ioat/accel_ioat_rpc.o 00:01:48.780 CC module/accel/dsa/accel_dsa.o 00:01:48.780 CC module/accel/dsa/accel_dsa_rpc.o 00:01:48.780 CC module/accel/error/accel_error.o 00:01:48.780 CC module/accel/error/accel_error_rpc.o 00:01:48.780 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:49.038 LIB libspdk_env_dpdk_rpc.a 00:01:49.038 SO libspdk_env_dpdk_rpc.so.5.0 00:01:49.038 SYMLINK libspdk_env_dpdk_rpc.so 00:01:49.038 LIB libspdk_scheduler_gscheduler.a 00:01:49.038 LIB libspdk_scheduler_dpdk_governor.a 00:01:49.038 SO libspdk_scheduler_gscheduler.so.3.0 00:01:49.038 SO libspdk_scheduler_dpdk_governor.so.3.0 00:01:49.038 LIB libspdk_accel_error.a 00:01:49.038 LIB libspdk_accel_ioat.a 00:01:49.038 LIB libspdk_scheduler_dynamic.a 00:01:49.038 LIB libspdk_accel_iaa.a 00:01:49.038 SO libspdk_accel_error.so.1.0 00:01:49.038 SO libspdk_accel_ioat.so.5.0 00:01:49.038 SO libspdk_scheduler_dynamic.so.3.0 00:01:49.038 SYMLINK libspdk_scheduler_gscheduler.so 00:01:49.038 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:49.038 SO libspdk_accel_iaa.so.2.0 00:01:49.038 LIB libspdk_accel_dsa.a 00:01:49.038 SYMLINK libspdk_scheduler_dynamic.so 00:01:49.038 SYMLINK libspdk_accel_error.so 00:01:49.038 SO libspdk_accel_dsa.so.4.0 00:01:49.038 SYMLINK libspdk_accel_ioat.so 00:01:49.038 LIB libspdk_blob_bdev.a 00:01:49.038 SYMLINK libspdk_accel_iaa.so 00:01:49.038 SO libspdk_blob_bdev.so.10.1 00:01:49.301 SYMLINK libspdk_accel_dsa.so 00:01:49.301 SYMLINK libspdk_blob_bdev.so 00:01:49.301 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:49.301 CC module/bdev/null/bdev_null.o 00:01:49.301 CC module/bdev/malloc/bdev_malloc.o 00:01:49.301 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:49.301 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:49.301 CC module/blobfs/bdev/blobfs_bdev.o 00:01:49.301 CC module/bdev/delay/vbdev_delay.o 00:01:49.301 CC module/bdev/lvol/vbdev_lvol.o 00:01:49.301 CC module/bdev/gpt/gpt.o 00:01:49.301 CC module/bdev/ftl/bdev_ftl.o 00:01:49.301 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:49.301 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:49.301 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:49.301 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:49.301 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:49.301 CC module/bdev/nvme/bdev_nvme.o 00:01:49.301 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:49.301 CC module/bdev/iscsi/bdev_iscsi.o 00:01:49.301 CC module/bdev/null/bdev_null_rpc.o 00:01:49.301 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:49.301 CC module/bdev/gpt/vbdev_gpt.o 00:01:49.301 CC module/bdev/error/vbdev_error.o 
00:01:49.301 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:49.301 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:49.301 CC module/bdev/nvme/nvme_rpc.o 00:01:49.301 CC module/bdev/nvme/bdev_mdns_client.o 00:01:49.301 CC module/bdev/passthru/vbdev_passthru.o 00:01:49.301 CC module/bdev/raid/bdev_raid.o 00:01:49.301 CC module/bdev/error/vbdev_error_rpc.o 00:01:49.301 CC module/bdev/aio/bdev_aio.o 00:01:49.301 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:49.301 CC module/bdev/nvme/vbdev_opal.o 00:01:49.301 CC module/bdev/raid/bdev_raid_rpc.o 00:01:49.301 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:49.301 CC module/bdev/raid/bdev_raid_sb.o 00:01:49.301 CC module/bdev/aio/bdev_aio_rpc.o 00:01:49.301 CC module/bdev/split/vbdev_split.o 00:01:49.301 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:49.301 CC module/bdev/raid/raid0.o 00:01:49.301 CC module/bdev/split/vbdev_split_rpc.o 00:01:49.301 CC module/bdev/raid/raid1.o 00:01:49.301 CC module/bdev/raid/concat.o 00:01:49.869 LIB libspdk_sock_posix.a 00:01:49.869 LIB libspdk_blobfs_bdev.a 00:01:49.869 SO libspdk_sock_posix.so.5.0 00:01:49.869 SO libspdk_blobfs_bdev.so.5.0 00:01:49.869 SYMLINK libspdk_blobfs_bdev.so 00:01:49.869 LIB libspdk_bdev_error.a 00:01:49.869 LIB libspdk_bdev_null.a 00:01:49.869 SYMLINK libspdk_sock_posix.so 00:01:49.869 LIB libspdk_bdev_split.a 00:01:49.869 LIB libspdk_bdev_passthru.a 00:01:49.869 SO libspdk_bdev_error.so.5.0 00:01:49.869 SO libspdk_bdev_null.so.5.0 00:01:49.869 SO libspdk_bdev_split.so.5.0 00:01:49.869 SO libspdk_bdev_passthru.so.5.0 00:01:49.869 LIB libspdk_bdev_ftl.a 00:01:49.869 LIB libspdk_bdev_gpt.a 00:01:49.869 SO libspdk_bdev_ftl.so.5.0 00:01:49.869 SO libspdk_bdev_gpt.so.5.0 00:01:49.869 SYMLINK libspdk_bdev_error.so 00:01:49.869 LIB libspdk_bdev_malloc.a 00:01:49.869 LIB libspdk_bdev_zone_block.a 00:01:49.869 LIB libspdk_bdev_aio.a 00:01:49.869 SYMLINK libspdk_bdev_split.so 00:01:49.869 SYMLINK libspdk_bdev_null.so 00:01:49.869 SYMLINK libspdk_bdev_passthru.so 00:01:49.869 SO libspdk_bdev_malloc.so.5.0 00:01:49.869 SO libspdk_bdev_aio.so.5.0 00:01:49.869 SO libspdk_bdev_zone_block.so.5.0 00:01:49.869 LIB libspdk_bdev_iscsi.a 00:01:49.869 LIB libspdk_bdev_virtio.a 00:01:49.869 LIB libspdk_bdev_delay.a 00:01:49.869 SYMLINK libspdk_bdev_gpt.so 00:01:49.869 SYMLINK libspdk_bdev_ftl.so 00:01:49.869 SO libspdk_bdev_iscsi.so.5.0 00:01:49.869 SO libspdk_bdev_delay.so.5.0 00:01:49.869 SO libspdk_bdev_virtio.so.5.0 00:01:49.869 SYMLINK libspdk_bdev_aio.so 00:01:49.869 SYMLINK libspdk_bdev_malloc.so 00:01:49.869 SYMLINK libspdk_bdev_zone_block.so 00:01:50.128 SYMLINK libspdk_bdev_delay.so 00:01:50.128 SYMLINK libspdk_bdev_iscsi.so 00:01:50.128 SYMLINK libspdk_bdev_virtio.so 00:01:50.128 LIB libspdk_bdev_lvol.a 00:01:50.128 SO libspdk_bdev_lvol.so.5.0 00:01:50.128 SYMLINK libspdk_bdev_lvol.so 00:01:50.386 LIB libspdk_bdev_raid.a 00:01:50.386 SO libspdk_bdev_raid.so.5.0 00:01:50.644 SYMLINK libspdk_bdev_raid.so 00:01:51.578 LIB libspdk_bdev_nvme.a 00:01:51.578 SO libspdk_bdev_nvme.so.6.0 00:01:51.578 SYMLINK libspdk_bdev_nvme.so 00:01:51.848 CC module/event/subsystems/vmd/vmd.o 00:01:51.848 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:51.848 CC module/event/subsystems/scheduler/scheduler.o 00:01:51.848 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:51.848 CC module/event/subsystems/sock/sock.o 00:01:51.848 CC module/event/subsystems/iobuf/iobuf.o 00:01:51.848 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:51.848 LIB libspdk_event_sock.a 00:01:51.848 LIB libspdk_event_vhost_blk.a 00:01:52.106 LIB 
libspdk_event_scheduler.a 00:01:52.106 LIB libspdk_event_vmd.a 00:01:52.106 LIB libspdk_event_iobuf.a 00:01:52.106 SO libspdk_event_sock.so.4.0 00:01:52.106 SO libspdk_event_vhost_blk.so.2.0 00:01:52.106 SO libspdk_event_scheduler.so.3.0 00:01:52.106 SO libspdk_event_vmd.so.5.0 00:01:52.106 SO libspdk_event_iobuf.so.2.0 00:01:52.106 SYMLINK libspdk_event_sock.so 00:01:52.107 SYMLINK libspdk_event_vhost_blk.so 00:01:52.107 SYMLINK libspdk_event_scheduler.so 00:01:52.107 SYMLINK libspdk_event_vmd.so 00:01:52.107 SYMLINK libspdk_event_iobuf.so 00:01:52.107 CC module/event/subsystems/accel/accel.o 00:01:52.365 LIB libspdk_event_accel.a 00:01:52.365 SO libspdk_event_accel.so.5.0 00:01:52.365 SYMLINK libspdk_event_accel.so 00:01:52.623 CC module/event/subsystems/bdev/bdev.o 00:01:52.623 LIB libspdk_event_bdev.a 00:01:52.623 SO libspdk_event_bdev.so.5.0 00:01:52.881 SYMLINK libspdk_event_bdev.so 00:01:52.881 CC module/event/subsystems/ublk/ublk.o 00:01:52.881 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:52.881 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:52.881 CC module/event/subsystems/nbd/nbd.o 00:01:52.881 CC module/event/subsystems/scsi/scsi.o 00:01:53.141 LIB libspdk_event_ublk.a 00:01:53.141 LIB libspdk_event_nbd.a 00:01:53.141 LIB libspdk_event_scsi.a 00:01:53.141 SO libspdk_event_ublk.so.2.0 00:01:53.141 SO libspdk_event_nbd.so.5.0 00:01:53.141 SO libspdk_event_scsi.so.5.0 00:01:53.141 SYMLINK libspdk_event_ublk.so 00:01:53.141 SYMLINK libspdk_event_nbd.so 00:01:53.141 LIB libspdk_event_nvmf.a 00:01:53.141 SYMLINK libspdk_event_scsi.so 00:01:53.141 SO libspdk_event_nvmf.so.5.0 00:01:53.141 SYMLINK libspdk_event_nvmf.so 00:01:53.141 CC module/event/subsystems/iscsi/iscsi.o 00:01:53.141 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:53.400 LIB libspdk_event_vhost_scsi.a 00:01:53.400 LIB libspdk_event_iscsi.a 00:01:53.400 SO libspdk_event_vhost_scsi.so.2.0 00:01:53.400 SO libspdk_event_iscsi.so.5.0 00:01:53.400 SYMLINK libspdk_event_vhost_scsi.so 00:01:53.400 SYMLINK libspdk_event_iscsi.so 00:01:53.663 SO libspdk.so.5.0 00:01:53.663 SYMLINK libspdk.so 00:01:53.663 CXX app/trace/trace.o 00:01:53.663 CC app/spdk_top/spdk_top.o 00:01:53.663 CC app/spdk_nvme_discover/discovery_aer.o 00:01:53.663 CC app/trace_record/trace_record.o 00:01:53.663 CC app/spdk_nvme_perf/perf.o 00:01:53.663 CC app/spdk_lspci/spdk_lspci.o 00:01:53.663 TEST_HEADER include/spdk/accel.h 00:01:53.663 TEST_HEADER include/spdk/accel_module.h 00:01:53.663 TEST_HEADER include/spdk/assert.h 00:01:53.663 TEST_HEADER include/spdk/barrier.h 00:01:53.663 CC app/spdk_nvme_identify/identify.o 00:01:53.663 TEST_HEADER include/spdk/base64.h 00:01:53.663 CC test/rpc_client/rpc_client_test.o 00:01:53.663 TEST_HEADER include/spdk/bdev.h 00:01:53.663 TEST_HEADER include/spdk/bdev_module.h 00:01:53.663 TEST_HEADER include/spdk/bdev_zone.h 00:01:53.663 TEST_HEADER include/spdk/bit_array.h 00:01:53.663 TEST_HEADER include/spdk/bit_pool.h 00:01:53.663 TEST_HEADER include/spdk/blob_bdev.h 00:01:53.663 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:53.663 TEST_HEADER include/spdk/blobfs.h 00:01:53.663 TEST_HEADER include/spdk/blob.h 00:01:53.663 TEST_HEADER include/spdk/conf.h 00:01:53.663 TEST_HEADER include/spdk/config.h 00:01:53.663 TEST_HEADER include/spdk/cpuset.h 00:01:53.663 TEST_HEADER include/spdk/crc16.h 00:01:53.663 TEST_HEADER include/spdk/crc32.h 00:01:53.663 TEST_HEADER include/spdk/crc64.h 00:01:53.663 TEST_HEADER include/spdk/dif.h 00:01:53.663 TEST_HEADER include/spdk/dma.h 00:01:53.663 TEST_HEADER 
include/spdk/endian.h 00:01:53.663 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:53.663 TEST_HEADER include/spdk/env_dpdk.h 00:01:53.663 CC app/iscsi_tgt/iscsi_tgt.o 00:01:53.663 CC app/spdk_dd/spdk_dd.o 00:01:53.663 TEST_HEADER include/spdk/env.h 00:01:53.663 TEST_HEADER include/spdk/event.h 00:01:53.663 CC app/nvmf_tgt/nvmf_main.o 00:01:53.663 TEST_HEADER include/spdk/fd_group.h 00:01:53.663 CC app/vhost/vhost.o 00:01:53.663 TEST_HEADER include/spdk/fd.h 00:01:53.663 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:53.663 CC examples/ioat/perf/perf.o 00:01:53.663 TEST_HEADER include/spdk/file.h 00:01:53.663 CC examples/nvme/reconnect/reconnect.o 00:01:53.663 TEST_HEADER include/spdk/ftl.h 00:01:53.663 CC test/app/jsoncat/jsoncat.o 00:01:53.663 CC examples/nvme/hello_world/hello_world.o 00:01:53.663 CC test/app/histogram_perf/histogram_perf.o 00:01:53.924 TEST_HEADER include/spdk/gpt_spec.h 00:01:53.924 CC examples/ioat/verify/verify.o 00:01:53.924 CC examples/nvme/hotplug/hotplug.o 00:01:53.924 CC test/event/event_perf/event_perf.o 00:01:53.924 TEST_HEADER include/spdk/hexlify.h 00:01:53.924 CC examples/sock/hello_world/hello_sock.o 00:01:53.924 CC examples/idxd/perf/perf.o 00:01:53.924 CC examples/util/zipf/zipf.o 00:01:53.924 CC examples/vmd/lsvmd/lsvmd.o 00:01:53.924 CC test/event/reactor/reactor.o 00:01:53.924 TEST_HEADER include/spdk/histogram_data.h 00:01:53.924 CC test/thread/poller_perf/poller_perf.o 00:01:53.924 CC app/fio/nvme/fio_plugin.o 00:01:53.924 CC examples/nvme/arbitration/arbitration.o 00:01:53.924 TEST_HEADER include/spdk/idxd.h 00:01:53.924 CC test/nvme/aer/aer.o 00:01:53.924 CC test/app/stub/stub.o 00:01:53.924 TEST_HEADER include/spdk/idxd_spec.h 00:01:53.924 CC examples/accel/perf/accel_perf.o 00:01:53.924 TEST_HEADER include/spdk/init.h 00:01:53.924 CC app/spdk_tgt/spdk_tgt.o 00:01:53.924 TEST_HEADER include/spdk/ioat.h 00:01:53.924 TEST_HEADER include/spdk/ioat_spec.h 00:01:53.924 TEST_HEADER include/spdk/iscsi_spec.h 00:01:53.924 TEST_HEADER include/spdk/json.h 00:01:53.924 TEST_HEADER include/spdk/jsonrpc.h 00:01:53.924 TEST_HEADER include/spdk/likely.h 00:01:53.924 TEST_HEADER include/spdk/log.h 00:01:53.924 TEST_HEADER include/spdk/lvol.h 00:01:53.924 TEST_HEADER include/spdk/memory.h 00:01:53.924 TEST_HEADER include/spdk/mmio.h 00:01:53.924 CC test/dma/test_dma/test_dma.o 00:01:53.924 CC examples/blob/hello_world/hello_blob.o 00:01:53.924 TEST_HEADER include/spdk/nbd.h 00:01:53.924 CC examples/bdev/hello_world/hello_bdev.o 00:01:53.924 CC examples/bdev/bdevperf/bdevperf.o 00:01:53.924 CC examples/thread/thread/thread_ex.o 00:01:53.924 TEST_HEADER include/spdk/notify.h 00:01:53.924 CC test/bdev/bdevio/bdevio.o 00:01:53.924 CC test/blobfs/mkfs/mkfs.o 00:01:53.924 CC test/accel/dif/dif.o 00:01:53.924 TEST_HEADER include/spdk/nvme.h 00:01:53.924 TEST_HEADER include/spdk/nvme_intel.h 00:01:53.924 CC examples/nvmf/nvmf/nvmf.o 00:01:53.924 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:53.924 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:53.924 TEST_HEADER include/spdk/nvme_spec.h 00:01:53.924 TEST_HEADER include/spdk/nvme_zns.h 00:01:53.924 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:53.924 CC test/app/bdev_svc/bdev_svc.o 00:01:53.924 CC test/env/mem_callbacks/mem_callbacks.o 00:01:53.924 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:53.924 TEST_HEADER include/spdk/nvmf.h 00:01:53.924 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:53.924 TEST_HEADER include/spdk/nvmf_spec.h 00:01:53.924 CC test/lvol/esnap/esnap.o 00:01:53.924 TEST_HEADER 
include/spdk/nvmf_transport.h 00:01:53.924 TEST_HEADER include/spdk/opal.h 00:01:53.924 TEST_HEADER include/spdk/opal_spec.h 00:01:53.924 TEST_HEADER include/spdk/pci_ids.h 00:01:53.924 TEST_HEADER include/spdk/pipe.h 00:01:53.924 TEST_HEADER include/spdk/queue.h 00:01:53.924 TEST_HEADER include/spdk/reduce.h 00:01:53.924 TEST_HEADER include/spdk/rpc.h 00:01:53.924 TEST_HEADER include/spdk/scheduler.h 00:01:53.924 TEST_HEADER include/spdk/scsi.h 00:01:53.924 TEST_HEADER include/spdk/scsi_spec.h 00:01:53.924 TEST_HEADER include/spdk/sock.h 00:01:53.924 TEST_HEADER include/spdk/stdinc.h 00:01:53.924 TEST_HEADER include/spdk/string.h 00:01:53.924 TEST_HEADER include/spdk/thread.h 00:01:53.924 TEST_HEADER include/spdk/trace.h 00:01:53.924 TEST_HEADER include/spdk/trace_parser.h 00:01:53.924 TEST_HEADER include/spdk/tree.h 00:01:53.924 TEST_HEADER include/spdk/ublk.h 00:01:53.924 LINK spdk_lspci 00:01:53.924 TEST_HEADER include/spdk/util.h 00:01:53.924 TEST_HEADER include/spdk/uuid.h 00:01:53.924 TEST_HEADER include/spdk/version.h 00:01:53.924 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:53.924 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:53.924 TEST_HEADER include/spdk/vhost.h 00:01:53.924 TEST_HEADER include/spdk/vmd.h 00:01:53.924 TEST_HEADER include/spdk/xor.h 00:01:53.924 TEST_HEADER include/spdk/zipf.h 00:01:53.924 CXX test/cpp_headers/accel.o 00:01:54.190 LINK spdk_nvme_discover 00:01:54.190 LINK jsoncat 00:01:54.190 LINK lsvmd 00:01:54.190 LINK rpc_client_test 00:01:54.190 LINK reactor 00:01:54.190 LINK histogram_perf 00:01:54.190 LINK event_perf 00:01:54.190 LINK poller_perf 00:01:54.190 LINK zipf 00:01:54.190 LINK interrupt_tgt 00:01:54.190 LINK nvmf_tgt 00:01:54.190 LINK vhost 00:01:54.190 LINK stub 00:01:54.190 LINK iscsi_tgt 00:01:54.190 LINK spdk_trace_record 00:01:54.190 LINK ioat_perf 00:01:54.190 LINK spdk_tgt 00:01:54.190 LINK verify 00:01:54.190 LINK mkfs 00:01:54.190 LINK hello_world 00:01:54.190 LINK bdev_svc 00:01:54.190 LINK hotplug 00:01:54.190 LINK hello_sock 00:01:54.190 CXX test/cpp_headers/accel_module.o 00:01:54.190 LINK hello_bdev 00:01:54.457 LINK hello_blob 00:01:54.457 LINK thread 00:01:54.457 LINK aer 00:01:54.457 CXX test/cpp_headers/assert.o 00:01:54.457 LINK arbitration 00:01:54.457 LINK reconnect 00:01:54.457 LINK idxd_perf 00:01:54.457 LINK spdk_dd 00:01:54.457 CC test/env/vtophys/vtophys.o 00:01:54.457 CC examples/vmd/led/led.o 00:01:54.457 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:54.457 LINK nvmf 00:01:54.457 CXX test/cpp_headers/barrier.o 00:01:54.457 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:54.457 CC examples/nvme/abort/abort.o 00:01:54.457 CXX test/cpp_headers/base64.o 00:01:54.457 CC test/event/reactor_perf/reactor_perf.o 00:01:54.457 LINK spdk_trace 00:01:54.457 CC test/nvme/reset/reset.o 00:01:54.457 LINK test_dma 00:01:54.457 LINK dif 00:01:54.457 CC examples/blob/cli/blobcli.o 00:01:54.457 CXX test/cpp_headers/bdev.o 00:01:54.457 CC app/fio/bdev/fio_plugin.o 00:01:54.457 LINK bdevio 00:01:54.721 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:54.721 CC test/nvme/sgl/sgl.o 00:01:54.721 CC test/nvme/e2edp/nvme_dp.o 00:01:54.721 CXX test/cpp_headers/bdev_module.o 00:01:54.721 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:54.721 CC test/env/memory/memory_ut.o 00:01:54.721 LINK nvme_manage 00:01:54.721 LINK accel_perf 00:01:54.721 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:54.721 LINK nvme_fuzz 00:01:54.721 CC test/event/app_repeat/app_repeat.o 00:01:54.721 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:54.721 
CXX test/cpp_headers/bdev_zone.o 00:01:54.721 CC test/nvme/err_injection/err_injection.o 00:01:54.721 CC test/env/pci/pci_ut.o 00:01:54.721 CC test/nvme/overhead/overhead.o 00:01:54.721 CC test/event/scheduler/scheduler.o 00:01:54.721 CXX test/cpp_headers/bit_array.o 00:01:54.721 LINK vtophys 00:01:54.721 LINK led 00:01:54.721 LINK reactor_perf 00:01:54.721 CC test/nvme/startup/startup.o 00:01:54.721 LINK spdk_nvme 00:01:54.980 CC test/nvme/reserve/reserve.o 00:01:54.980 CC test/nvme/simple_copy/simple_copy.o 00:01:54.980 LINK cmb_copy 00:01:54.980 CXX test/cpp_headers/bit_pool.o 00:01:54.980 CC test/nvme/connect_stress/connect_stress.o 00:01:54.980 CXX test/cpp_headers/blob_bdev.o 00:01:54.980 CXX test/cpp_headers/blobfs_bdev.o 00:01:54.980 CC test/nvme/boot_partition/boot_partition.o 00:01:54.980 CXX test/cpp_headers/blobfs.o 00:01:54.980 LINK env_dpdk_post_init 00:01:54.980 CXX test/cpp_headers/blob.o 00:01:54.980 CC test/nvme/compliance/nvme_compliance.o 00:01:54.980 CXX test/cpp_headers/conf.o 00:01:54.980 CXX test/cpp_headers/config.o 00:01:54.980 LINK pmr_persistence 00:01:54.980 CC test/nvme/fused_ordering/fused_ordering.o 00:01:54.980 LINK reset 00:01:54.980 CC test/nvme/fdp/fdp.o 00:01:54.980 CXX test/cpp_headers/cpuset.o 00:01:54.980 LINK app_repeat 00:01:54.980 CXX test/cpp_headers/crc16.o 00:01:54.980 CXX test/cpp_headers/crc32.o 00:01:54.980 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:54.980 LINK mem_callbacks 00:01:54.980 CXX test/cpp_headers/crc64.o 00:01:54.980 CXX test/cpp_headers/dif.o 00:01:54.980 CXX test/cpp_headers/dma.o 00:01:55.242 CXX test/cpp_headers/endian.o 00:01:55.242 CC test/nvme/cuse/cuse.o 00:01:55.242 CXX test/cpp_headers/env_dpdk.o 00:01:55.242 LINK err_injection 00:01:55.242 LINK sgl 00:01:55.242 LINK startup 00:01:55.242 CXX test/cpp_headers/env.o 00:01:55.242 CXX test/cpp_headers/event.o 00:01:55.242 LINK nvme_dp 00:01:55.242 LINK spdk_nvme_perf 00:01:55.242 LINK scheduler 00:01:55.242 LINK spdk_nvme_identify 00:01:55.242 LINK boot_partition 00:01:55.242 CXX test/cpp_headers/fd_group.o 00:01:55.242 LINK abort 00:01:55.242 CXX test/cpp_headers/fd.o 00:01:55.242 LINK connect_stress 00:01:55.242 CXX test/cpp_headers/file.o 00:01:55.242 LINK reserve 00:01:55.242 CXX test/cpp_headers/ftl.o 00:01:55.242 LINK bdevperf 00:01:55.242 CXX test/cpp_headers/gpt_spec.o 00:01:55.242 LINK simple_copy 00:01:55.242 CXX test/cpp_headers/hexlify.o 00:01:55.242 CXX test/cpp_headers/histogram_data.o 00:01:55.242 LINK overhead 00:01:55.242 CXX test/cpp_headers/idxd.o 00:01:55.505 CXX test/cpp_headers/idxd_spec.o 00:01:55.505 CXX test/cpp_headers/init.o 00:01:55.505 LINK spdk_top 00:01:55.505 CXX test/cpp_headers/ioat.o 00:01:55.505 CXX test/cpp_headers/ioat_spec.o 00:01:55.505 CXX test/cpp_headers/iscsi_spec.o 00:01:55.505 CXX test/cpp_headers/json.o 00:01:55.505 CXX test/cpp_headers/jsonrpc.o 00:01:55.505 LINK doorbell_aers 00:01:55.505 LINK pci_ut 00:01:55.505 CXX test/cpp_headers/likely.o 00:01:55.505 LINK fused_ordering 00:01:55.505 CXX test/cpp_headers/log.o 00:01:55.505 CXX test/cpp_headers/lvol.o 00:01:55.505 LINK blobcli 00:01:55.505 CXX test/cpp_headers/memory.o 00:01:55.505 LINK vhost_fuzz 00:01:55.505 CXX test/cpp_headers/mmio.o 00:01:55.505 CXX test/cpp_headers/nbd.o 00:01:55.505 CXX test/cpp_headers/notify.o 00:01:55.505 LINK spdk_bdev 00:01:55.505 CXX test/cpp_headers/nvme.o 00:01:55.505 CXX test/cpp_headers/nvme_intel.o 00:01:55.505 CXX test/cpp_headers/nvme_ocssd.o 00:01:55.505 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:55.505 CXX 
test/cpp_headers/nvme_spec.o 00:01:55.505 CXX test/cpp_headers/nvme_zns.o 00:01:55.505 CXX test/cpp_headers/nvmf_cmd.o 00:01:55.505 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:55.505 CXX test/cpp_headers/nvmf.o 00:01:55.505 CXX test/cpp_headers/nvmf_spec.o 00:01:55.505 CXX test/cpp_headers/nvmf_transport.o 00:01:55.505 CXX test/cpp_headers/opal.o 00:01:55.505 CXX test/cpp_headers/opal_spec.o 00:01:55.505 LINK nvme_compliance 00:01:55.505 CXX test/cpp_headers/pci_ids.o 00:01:55.505 CXX test/cpp_headers/pipe.o 00:01:55.763 CXX test/cpp_headers/queue.o 00:01:55.763 CXX test/cpp_headers/reduce.o 00:01:55.763 CXX test/cpp_headers/rpc.o 00:01:55.763 LINK fdp 00:01:55.763 CXX test/cpp_headers/scheduler.o 00:01:55.763 CXX test/cpp_headers/scsi.o 00:01:55.763 CXX test/cpp_headers/scsi_spec.o 00:01:55.763 CXX test/cpp_headers/sock.o 00:01:55.763 CXX test/cpp_headers/stdinc.o 00:01:55.763 CXX test/cpp_headers/string.o 00:01:55.763 CXX test/cpp_headers/thread.o 00:01:55.763 CXX test/cpp_headers/trace.o 00:01:55.763 CXX test/cpp_headers/trace_parser.o 00:01:55.763 CXX test/cpp_headers/ublk.o 00:01:55.763 CXX test/cpp_headers/tree.o 00:01:55.763 CXX test/cpp_headers/util.o 00:01:55.763 CXX test/cpp_headers/uuid.o 00:01:55.763 CXX test/cpp_headers/version.o 00:01:55.763 CXX test/cpp_headers/vfio_user_pci.o 00:01:55.763 CXX test/cpp_headers/vfio_user_spec.o 00:01:55.763 CXX test/cpp_headers/vhost.o 00:01:55.763 CXX test/cpp_headers/vmd.o 00:01:55.763 CXX test/cpp_headers/xor.o 00:01:55.763 CXX test/cpp_headers/zipf.o 00:01:56.329 LINK memory_ut 00:01:56.586 LINK cuse 00:01:56.843 LINK iscsi_fuzz 00:01:59.368 LINK esnap 00:01:59.626 00:01:59.626 real 0m45.580s 00:01:59.626 user 9m36.343s 00:01:59.626 sys 2m9.225s 00:01:59.626 05:57:06 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:59.626 05:57:06 -- common/autotest_common.sh@10 -- $ set +x 00:01:59.626 ************************************ 00:01:59.626 END TEST make 00:01:59.626 ************************************ 00:01:59.884 05:57:06 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:59.884 05:57:06 -- nvmf/common.sh@7 -- # uname -s 00:01:59.884 05:57:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:59.884 05:57:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:59.884 05:57:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:59.884 05:57:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:59.884 05:57:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:59.884 05:57:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:59.884 05:57:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:59.884 05:57:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:59.884 05:57:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:59.884 05:57:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:59.884 05:57:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:01:59.884 05:57:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:01:59.884 05:57:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:59.884 05:57:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:59.884 05:57:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:59.884 05:57:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:59.884 05:57:06 -- scripts/common.sh@433 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:01:59.884 05:57:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:59.884 05:57:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:59.884 05:57:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:59.884 05:57:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:59.884 05:57:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:59.884 05:57:06 -- paths/export.sh@5 -- # export PATH 00:01:59.884 05:57:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:59.884 05:57:06 -- nvmf/common.sh@46 -- # : 0 00:01:59.884 05:57:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:01:59.884 05:57:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:01:59.884 05:57:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:01:59.884 05:57:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:59.884 05:57:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:59.884 05:57:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:01:59.884 05:57:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:01:59.884 05:57:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:01:59.884 05:57:06 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:59.884 05:57:06 -- spdk/autotest.sh@32 -- # uname -s 00:01:59.884 05:57:06 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:59.884 05:57:06 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:59.884 05:57:06 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:59.884 05:57:06 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:59.884 05:57:06 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:59.884 05:57:06 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:59.884 05:57:06 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:59.884 05:57:06 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:59.884 05:57:06 -- spdk/autotest.sh@48 -- # udevadm_pid=962287 00:01:59.884 05:57:06 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:59.884 05:57:06 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:01:59.884 05:57:06 -- spdk/autotest.sh@54 -- # echo 962289 00:01:59.884 05:57:06 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:01:59.884 05:57:06 -- spdk/autotest.sh@56 -- # echo 962290 00:01:59.884 05:57:06 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:01:59.884 05:57:06 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]] 00:01:59.884 05:57:06 -- spdk/autotest.sh@60 -- # echo 962291 00:01:59.884 05:57:06 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:01:59.884 05:57:06 -- spdk/autotest.sh@62 -- # echo 962292 00:01:59.884 05:57:06 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:01:59.884 05:57:06 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:59.884 05:57:06 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:01:59.884 05:57:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:01:59.884 05:57:06 -- common/autotest_common.sh@10 -- # set +x 00:01:59.884 05:57:06 -- spdk/autotest.sh@70 -- # create_test_list 00:01:59.884 05:57:06 -- common/autotest_common.sh@736 -- # xtrace_disable 00:01:59.884 05:57:06 -- common/autotest_common.sh@10 -- # set +x 00:01:59.884 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:01:59.884 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:01:59.884 05:57:06 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:59.884 05:57:06 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:59.884 05:57:06 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:59.884 05:57:06 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:59.884 05:57:06 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:59.884 05:57:06 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:01:59.884 05:57:06 -- common/autotest_common.sh@1440 -- # uname 00:01:59.884 05:57:06 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:01:59.884 05:57:06 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:01:59.884 05:57:06 -- common/autotest_common.sh@1460 -- # uname 00:01:59.884 05:57:06 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:01:59.884 05:57:06 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:01:59.884 05:57:06 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:01:59.884 05:57:06 -- spdk/autotest.sh@83 -- # hash lcov 00:01:59.884 05:57:06 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:59.884 05:57:06 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:01:59.884 --rc lcov_branch_coverage=1 00:01:59.884 --rc lcov_function_coverage=1 00:01:59.884 --rc genhtml_branch_coverage=1 00:01:59.884 --rc genhtml_function_coverage=1 00:01:59.884 --rc genhtml_legend=1 00:01:59.884 --rc geninfo_all_blocks=1 00:01:59.884 ' 00:01:59.884 05:57:06 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:01:59.884 --rc lcov_branch_coverage=1 00:01:59.884 --rc lcov_function_coverage=1 00:01:59.884 --rc genhtml_branch_coverage=1 00:01:59.884 
--rc genhtml_function_coverage=1 00:01:59.884 --rc genhtml_legend=1 00:01:59.884 --rc geninfo_all_blocks=1 00:01:59.884 ' 00:01:59.884 05:57:06 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:01:59.884 --rc lcov_branch_coverage=1 00:01:59.884 --rc lcov_function_coverage=1 00:01:59.884 --rc genhtml_branch_coverage=1 00:01:59.884 --rc genhtml_function_coverage=1 00:01:59.884 --rc genhtml_legend=1 00:01:59.884 --rc geninfo_all_blocks=1 00:01:59.884 --no-external' 00:01:59.884 05:57:06 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:01:59.885 --rc lcov_branch_coverage=1 00:01:59.885 --rc lcov_function_coverage=1 00:01:59.885 --rc genhtml_branch_coverage=1 00:01:59.885 --rc genhtml_function_coverage=1 00:01:59.885 --rc genhtml_legend=1 00:01:59.885 --rc geninfo_all_blocks=1 00:01:59.885 --no-external' 00:01:59.885 05:57:06 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:59.885 lcov: LCOV version 1.14 00:01:59.885 05:57:06 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:01.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:01.784 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:01.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:01.784 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:01.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:01.784 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:01.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:01.784 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:01.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:01.784 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:01.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:01.784 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:01.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:01.784 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:01.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:01.784 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:01.784 
00:02:01.784 geninfo: WARNING: GCOV did not produce any data for the header-only objects under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers (no functions found in the .gcno files for: bit_array, bit_pool, blob_bdev, blobfs_bdev, blobfs, blob, conf, config, cpuset, crc16, crc32, crc64, dma, dif, endian, env_dpdk, env, event, fd_group, fd, file, ftl, gpt_spec, hexlify, histogram_data, idxd, idxd_spec, init, ioat, ioat_spec, iscsi_spec, jsonrpc, json, likely, log, lvol, memory, mmio, nbd, notify, nvme, nvme_intel, nvme_ocssd, nvme_ocssd_spec, nvme_zns, nvme_spec, nvmf_cmd, nvmf, nvmf_fc_spec, nvmf_spec, nvmf_transport, opal, opal_spec, pci_ids, pipe, queue, rpc, reduce, scheduler, scsi, scsi_spec, sock, stdinc, string, thread, trace, trace_parser, ublk, tree, util, uuid, version, vfio_user_pci, vfio_user_spec, vhost, vmd, xor, zipf)
00:02:16.657 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno, ftl_p2l_upgrade.gcno and ftl_chunk_upgrade.gcno (no functions found)
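These warnings are expected for the cpp_headers objects: each public header is compiled into a translation unit that defines no functions, so the emitted .gcno has no function records for geninfo to report. A minimal, hypothetical reproduction of that situation (assuming gcc and lcov are installed; the file name is illustrative, not SPDK's):

# Build an object from a header-only TU with coverage instrumentation enabled;
# the resulting .gcno carries no function records.
cat > hdr_only.c <<'EOF'
#include <stdint.h>   /* stand-in for one of the public spdk/*.h headers */
EOF
gcc --coverage -c hdr_only.c -o hdr_only.o

# Capturing an initial (baseline) trace makes geninfo visit the .gcno and emit
# the same kind of "no functions found" warning seen in the log above.
lcov --capture --initial --directory . --output-file baseline.info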
00:02:31.515 05:57:37 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup
00:02:31.515 05:57:37 -- common/autotest_common.sh@712 -- # xtrace_disable
00:02:31.515 05:57:37 -- common/autotest_common.sh@10 -- # set +x
00:02:31.515 05:57:37 -- spdk/autotest.sh@102 -- # rm -f
00:02:31.515 05:57:37 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:32.890 0000:88:00.0 (8086 0a54): Already using the nvme driver
00:02:32.890 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:02:32.890 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:02:32.890 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:02:32.890 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:02:32.890 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:02:32.890 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:02:32.890 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:02:32.890 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:02:32.890 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:02:32.890 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:02:32.890 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:02:32.890 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:02:32.890 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:02:32.890 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:02:32.890 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:02:32.890 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:02:32.890 05:57:39 -- spdk/autotest.sh@107 -- # get_zoned_devs
00:02:32.890 05:57:39 -- common/autotest_common.sh@1654 -- # zoned_devs=()
00:02:32.890 05:57:39 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs
00:02:32.890 05:57:39 -- common/autotest_common.sh@1655 -- # local nvme bdf
00:02:32.890 05:57:39 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme*
00:02:32.890 05:57:39 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1
00:02:32.890 05:57:39 -- common/autotest_common.sh@1647 -- # local device=nvme0n1
00:02:32.890 05:57:39 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:02:32.890 05:57:39 -- common/autotest_common.sh@1650 -- # [[ none != none ]]
00:02:32.890 05:57:39 -- spdk/autotest.sh@109 -- # (( 0 > 0 ))
00:02:32.890 05:57:39 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1
00:02:32.890 05:57:39 -- spdk/autotest.sh@121 -- # grep -v p
00:02:32.890 05:57:39 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true)
00:02:32.890 05:57:39 -- spdk/autotest.sh@123 -- # [[ -z '' ]]
00:02:32.890 05:57:39 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1
00:02:32.890 05:57:39 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt
00:02:32.890 05:57:39 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:02:32.890 No valid GPT data, bailing
00:02:32.890 05:57:39 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:02:32.890 05:57:39 -- scripts/common.sh@393 -- # pt=
00:02:32.890 05:57:39 -- scripts/common.sh@394 -- # return 1
00:02:32.890 05:57:39 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:02:32.890 1+0 records in
00:02:32.890 1+0 records out
00:02:32.890 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0021499 s, 488 MB/s
00:02:32.890 05:57:39 -- spdk/autotest.sh@129 -- # sync
00:02:32.890 05:57:39 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes
00:02:32.890 05:57:39 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:02:32.890 05:57:39 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:02:34.792 05:57:41 -- spdk/autotest.sh@135 -- # uname -s
00:02:34.793 05:57:41 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']'
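The pre-cleanup trace above does two things per NVMe namespace: it skips zoned devices, and it zeroes the first MiB of any namespace that carries no partition table. An illustrative sketch of those two checks (not the SPDK helpers themselves; assumes a Linux host with sysfs and blkid):

#!/usr/bin/env bash
# A block device is zoned when /sys/block/<dev>/queue/zoned reads anything
# other than "none" (e.g. "host-managed"); such namespaces are left alone.
is_block_zoned() {
    local dev=$1
    [[ -e /sys/block/$dev/queue/zoned ]] || return 1
    [[ $(< "/sys/block/$dev/queue/zoned") != none ]]
}

# The wipe only touches a namespace with no recognisable partition table:
# when blkid reports an empty PTTYPE (the "pt=" above), zero the first MiB.
wipe_if_blank() {
    local blk=$1
    if [[ -z $(blkid -s PTTYPE -o value "$blk") ]]; then
        dd if=/dev/zero of="$blk" bs=1M count=1
    fi
}

is_block_zoned nvme0n1 || wipe_if_blank /dev/nvme0n1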
00:02:34.793 05:57:41 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:02:34.793 05:57:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:02:34.793 05:57:41 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:02:34.793 05:57:41 -- common/autotest_common.sh@10 -- # set +x
00:02:34.793 ************************************
00:02:34.793 START TEST setup.sh
00:02:34.793 ************************************
00:02:34.793 05:57:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:02:34.793 * Looking for test storage...
00:02:34.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:02:34.793 05:57:41 -- setup/test-setup.sh@10 -- # uname -s
00:02:34.793 05:57:41 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:02:34.793 05:57:41 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:02:34.793 05:57:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:02:34.793 05:57:41 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:02:34.793 05:57:41 -- common/autotest_common.sh@10 -- # set +x
00:02:34.793 ************************************
00:02:34.793 START TEST acl
00:02:34.793 ************************************
00:02:34.793 05:57:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:02:34.793 * Looking for test storage...
00:02:34.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:02:34.793 05:57:41 -- setup/acl.sh@10 -- # get_zoned_devs
00:02:34.793 05:57:41 -- common/autotest_common.sh@1654 -- # zoned_devs=()
00:02:34.793 05:57:41 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs
00:02:34.793 05:57:41 -- common/autotest_common.sh@1655 -- # local nvme bdf
00:02:34.793 05:57:41 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme*
00:02:34.793 05:57:41 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1
00:02:34.793 05:57:41 -- common/autotest_common.sh@1647 -- # local device=nvme0n1
00:02:34.793 05:57:41 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:02:34.793 05:57:41 -- common/autotest_common.sh@1650 -- # [[ none != none ]]
00:02:34.793 05:57:41 -- setup/acl.sh@12 -- # devs=()
00:02:34.793 05:57:41 -- setup/acl.sh@12 -- # declare -a devs
00:02:34.793 05:57:41 -- setup/acl.sh@13 -- # drivers=()
00:02:34.793 05:57:41 -- setup/acl.sh@13 -- # declare -A drivers
00:02:34.793 05:57:41 -- setup/acl.sh@51 -- # setup reset
00:02:34.793 05:57:41 -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:34.793 05:57:41 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:36.168 05:57:42 -- setup/acl.sh@52 -- # collect_setup_devs
00:02:36.168 05:57:42 -- setup/acl.sh@16 -- # local dev driver
00:02:36.168 05:57:42 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:36.168 05:57:42 -- setup/acl.sh@15 -- # setup output status
00:02:36.168 05:57:42 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:36.168 05:57:42 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
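collect_setup_devs walks the `setup.sh status` table that follows, keeping only NVMe controllers that are not blocked. A minimal sketch of that loop, assuming the columns shown in the table header below (Type, BDF, Vendor, Device, NUMA, Driver, Device, Block devices) and assuming the block list is the PCI_BLOCKED variable (the empty string compared against 0000:88:00.0 in the trace):

declare -a devs
declare -A drivers

while read -r _ dev _ _ _ driver _; do
    [[ $dev == *:*:*.* ]] || continue          # skip the Hugepages/header rows
    [[ $driver == nvme ]] || continue          # ioatdma DMA channels are ignored
    [[ ${PCI_BLOCKED:-} == *"$dev"* ]] && continue
    devs+=("$dev")
    drivers["$dev"]=$driver
done < <(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status)

echo "collected ${#devs[@]} controller(s): ${devs[*]}"   # -> 0000:88:00.0 on this runner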
00:02:37.100 Hugepages
00:02:37.100 node hugesize free / total
00:02:37.100 05:57:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:37.100 05:57:43 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:02:37.100 05:57:43 -- setup/acl.sh@19 -- # continue
00:02:37.100 05:57:43 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:02:37.100 05:57:43 -- setup/acl.sh@19 -- # continue
00:02:37.100 05:57:43 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:02:37.100 05:57:43 -- setup/acl.sh@19 -- # continue
00:02:37.100 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:37.100 05:57:43 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:02:37.100 05:57:43 -- setup/acl.sh@19 -- # continue
00:02:37.100 05:57:43 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]]
00:02:37.100 05:57:43 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:37.100 05:57:43 -- setup/acl.sh@20 -- # continue
[the same acl.sh@18/@19/@20 read, check and continue repeat for the remaining ioatdma controllers 0000:00:04.1 through 0000:00:04.7 and 0000:80:04.0 through 0000:80:04.7]
00:02:37.359 05:57:43 -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]]
00:02:37.359 05:57:43 -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:02:37.359 05:57:43 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]]
00:02:37.359 05:57:43 -- setup/acl.sh@22 -- # devs+=("$dev")
00:02:37.359 05:57:43 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:02:37.359 05:57:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:37.359 05:57:43 -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:02:37.359 05:57:43 -- setup/acl.sh@54 -- # run_test denied denied
00:02:37.359 05:57:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:02:37.359 05:57:43 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:02:37.359 05:57:43 -- common/autotest_common.sh@10 -- # set +x
00:02:37.359 ************************************
00:02:37.359 START TEST denied
00:02:37.359 ************************************
00:02:37.359 05:57:43 -- common/autotest_common.sh@1104 -- # denied
00:02:37.359 05:57:43 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0'
00:02:37.359 05:57:43 -- setup/acl.sh@38 -- # setup output config
00:02:37.359 05:57:43 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0'
00:02:37.359 05:57:43 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:37.359 05:57:43 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:02:38.735 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0
00:02:38.735 05:57:45 -- setup/acl.sh@40 -- # verify 0000:88:00.0
00:02:38.735 05:57:45 -- setup/acl.sh@28 -- # local dev driver
00:02:38.735 05:57:45 -- setup/acl.sh@30 -- # for dev in "$@"
00:02:38.735 05:57:45 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]]
00:02:38.735 05:57:45 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver
00:02:38.735 05:57:45 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:02:38.735 05:57:45 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
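The verify step above amounts to resolving which kernel driver a PCI function is currently bound to and comparing it with what the test expects. A small sketch of that check (an illustration of the traced commands, not a quote of acl.sh):

# Resolve the bound driver of a PCI function via its sysfs "driver" symlink
# and compare the basename against the expected driver name.
verify_driver() {
    local bdf=$1 expected=$2 driver
    [[ -e /sys/bus/pci/devices/$bdf ]] || return 1
    driver=$(readlink -f "/sys/bus/pci/devices/$bdf/driver")
    [[ ${driver##*/} == "$expected" ]]
}

verify_driver 0000:88:00.0 nvme    # true while the controller is still on the nvme driver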
00:02:38.735 05:57:45 -- setup/acl.sh@41 -- # setup reset
00:02:38.735 05:57:45 -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:38.735 05:57:45 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:41.267
00:02:41.267 real 0m3.869s
00:02:41.267 user 0m1.173s
00:02:41.267 sys 0m1.800s
00:02:41.267 05:57:47 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:41.267 05:57:47 -- common/autotest_common.sh@10 -- # set +x
00:02:41.267 ************************************
00:02:41.267 END TEST denied
00:02:41.267 ************************************
00:02:41.267 05:57:47 -- setup/acl.sh@55 -- # run_test allowed allowed
00:02:41.267 05:57:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:02:41.267 05:57:47 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:02:41.267 05:57:47 -- common/autotest_common.sh@10 -- # set +x
00:02:41.267 ************************************
00:02:41.267 START TEST allowed
00:02:41.267 ************************************
00:02:41.267 05:57:47 -- common/autotest_common.sh@1104 -- # allowed
00:02:41.267 05:57:47 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0
00:02:41.267 05:57:47 -- setup/acl.sh@45 -- # setup output config
00:02:41.267 05:57:47 -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*'
00:02:41.267 05:57:47 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:41.267 05:57:47 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:02:43.809 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:02:43.809 05:57:49 -- setup/acl.sh@47 -- # verify
00:02:43.809 05:57:49 -- setup/acl.sh@28 -- # local dev driver
00:02:43.809 05:57:49 -- setup/acl.sh@48 -- # setup reset
00:02:43.809 05:57:49 -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:43.809 05:57:49 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:45.183
00:02:45.183 real 0m3.880s
00:02:45.183 user 0m1.071s
00:02:45.183 sys 0m1.655s
00:02:45.183 05:57:51 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:45.183 05:57:51 -- common/autotest_common.sh@10 -- # set +x
00:02:45.183 ************************************
00:02:45.183 END TEST allowed
00:02:45.183 ************************************
00:02:45.183
00:02:45.183 real 0m10.260s
00:02:45.183 user 0m3.223s
00:02:45.183 sys 0m5.075s
00:02:45.183 05:57:51 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:45.183 05:57:51 -- common/autotest_common.sh@10 -- # set +x
00:02:45.183 ************************************
00:02:45.183 END TEST acl
00:02:45.183 ************************************
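The denied and allowed cases drive scripts/setup.sh through the two environment variables visible in the trace: PCI_BLOCKED makes `setup.sh config` skip a controller, while PCI_ALLOWED restricts rebinding to a single controller. A hedged sketch of the same flow (paths relative to the SPDK checkout; the grep patterns are the ones used above):

#!/usr/bin/env bash
set -e
target=0000:88:00.0

# denied: a blocked controller must be reported as skipped by `setup.sh config`
PCI_BLOCKED=" $target" scripts/setup.sh config \
    | grep "Skipping denied controller at $target"
scripts/setup.sh reset

# allowed: with an allow-list of one BDF, only that controller is rebound
# from its kernel driver to a userspace driver (nvme -> vfio-pci here)
PCI_ALLOWED=$target scripts/setup.sh config \
    | grep -E "$target .*: nvme -> .*"
scripts/setup.sh reset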
00:02:45.183 05:57:51 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:02:45.183 05:57:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:02:45.183 05:57:51 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:02:45.183 05:57:51 -- common/autotest_common.sh@10 -- # set +x
00:02:45.183 ************************************
00:02:45.183 START TEST hugepages
00:02:45.183 ************************************
00:02:45.183 05:57:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:02:45.183 * Looking for test storage...
00:02:45.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:02:45.183 05:57:51 -- setup/hugepages.sh@10 -- # nodes_sys=()
00:02:45.183 05:57:51 -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:02:45.183 05:57:51 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:02:45.183 05:57:51 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:02:45.183 05:57:51 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:02:45.183 05:57:51 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:02:45.183 05:57:51 -- setup/common.sh@17 -- # local get=Hugepagesize
00:02:45.183 05:57:51 -- setup/common.sh@18 -- # local node=
00:02:45.183 05:57:51 -- setup/common.sh@19 -- # local var val
00:02:45.183 05:57:51 -- setup/common.sh@20 -- # local mem_f mem
00:02:45.183 05:57:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:45.183 05:57:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:45.183 05:57:51 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:45.184 05:57:51 -- setup/common.sh@28 -- # mapfile -t mem
00:02:45.184 05:57:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:45.184 05:57:51 -- setup/common.sh@31 -- # IFS=': '
00:02:45.184 05:57:51 -- setup/common.sh@31 -- # read -r var val _
00:02:45.184 05:57:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43815940 kB' 'MemAvailable: 47316216 kB' 'Buffers: 2704 kB' 'Cached: 10169144 kB' 'SwapCached: 0 kB' 'Active: 7161600 kB' 'Inactive: 3506552 kB' 'Active(anon): 6767248 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499700 kB' 'Mapped: 173452 kB' 'Shmem: 6270944 kB' 'KReclaimable: 185988 kB' 'Slab: 550684 kB' 'SReclaimable: 185988 kB' 'SUnreclaim: 364696 kB' 'KernelStack: 12816 kB' 'PageTables: 8272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 7888680 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 15947776 kB' 'DirectMap1G: 51380224 kB'
00:02:45.184 05:57:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:02:45.184 05:57:51 -- setup/common.sh@32 -- # continue
[the same setup/common.sh@31/@32 read, check and continue repeat for every remaining /proc/meminfo field until Hugepagesize is reached]
00:02:45.185 05:57:51 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:02:45.185 05:57:51 -- setup/common.sh@33 -- # echo 2048
00:02:45.185 05:57:51 -- setup/common.sh@33 -- # return 0
00:02:45.185 05:57:51 -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:02:45.185 05:57:51 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:02:45.185 05:57:51 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
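The get_meminfo trace above reads /proc/meminfo (or a NUMA node's meminfo file) into an array and scans it for one key. A condensed sketch of the same idea; the helper name and the per-node handling follow the trace, but this is an illustration rather than the exact setup/common.sh code:

# Print the value of one meminfo field, optionally for a single NUMA node.
# The real helper additionally strips the leading "Node <N> " prefix that the
# per-node meminfo files carry; plain /proc/meminfo does not need that.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo

    local -a mem
    mapfile -t mem < "$mem_f"

    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

get_meminfo Hugepagesize    # prints 2048 on this runner (value is in kB)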
00:02:45.185 05:57:51 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:02:45.185 05:57:51 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:02:45.185 05:57:51 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:02:45.185 05:57:51 -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:02:45.185 05:57:51 -- setup/hugepages.sh@207 -- # get_nodes
00:02:45.185 05:57:51 -- setup/hugepages.sh@27 -- # local node
00:02:45.185 05:57:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:45.185 05:57:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:02:45.185 05:57:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:45.185 05:57:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:45.185 05:57:51 -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:45.185 05:57:51 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:45.185 05:57:51 -- setup/hugepages.sh@208 -- # clear_hp
00:02:45.185 05:57:51 -- setup/hugepages.sh@37 -- # local node hp
00:02:45.185 05:57:51 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:02:45.185 05:57:51 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:45.185 05:57:51 -- setup/hugepages.sh@41 -- # echo 0
00:02:45.185 05:57:51 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:45.185 05:57:51 -- setup/hugepages.sh@41 -- # echo 0
00:02:45.185 05:57:51 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:02:45.185 05:57:51 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:45.185 05:57:51 -- setup/hugepages.sh@41 -- # echo 0
00:02:45.185 05:57:51 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:45.185 05:57:51 -- setup/hugepages.sh@41 -- # echo 0
00:02:45.185 05:57:51 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:02:45.185 05:57:51 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
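clear_hp, traced above, zeroes every per-node hugepage pool before the test sets its own counts. xtrace does not show the redirection target of the `echo 0` commands, so the sysfs path in this sketch is the standard kernel interface rather than a quote of the script:

# Reset all hugepage pools on every NUMA node (needs root). Each pool is a
# directory such as /sys/devices/system/node/node0/hugepages/hugepages-2048kB.
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done
done
export CLEAR_HUGE=yes    # exported for the later setup.sh calls, as in the trace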
00:02:45.185 05:57:51 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:02:45.185 05:57:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:02:45.185 05:57:51 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:02:45.185 05:57:51 -- common/autotest_common.sh@10 -- # set +x
00:02:45.185 ************************************
00:02:45.185 START TEST default_setup
00:02:45.185 ************************************
00:02:45.185 05:57:51 -- common/autotest_common.sh@1104 -- # default_setup
00:02:45.185 05:57:51 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:02:45.185 05:57:51 -- setup/hugepages.sh@49 -- # local size=2097152
00:02:45.185 05:57:51 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:02:45.185 05:57:51 -- setup/hugepages.sh@51 -- # shift
00:02:45.185 05:57:51 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:02:45.185 05:57:51 -- setup/hugepages.sh@52 -- # local node_ids
00:02:45.185 05:57:51 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:45.185 05:57:51 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:45.185 05:57:51 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:02:45.185 05:57:51 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:02:45.185 05:57:51 -- setup/hugepages.sh@62 -- # local user_nodes
00:02:45.185 05:57:51 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:45.185 05:57:51 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:45.185 05:57:51 -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:45.185 05:57:51 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:45.185 05:57:51 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:02:45.185 05:57:51 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:45.185 05:57:51 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:02:45.185 05:57:51 -- setup/hugepages.sh@73 -- # return 0
00:02:45.185 05:57:51 -- setup/hugepages.sh@137 -- # setup output
00:02:45.185 05:57:51 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:45.185 05:57:51 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:46.557 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:02:46.557 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:02:46.557 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:02:46.557 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:02:46.557 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:02:46.557 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:02:46.557 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:02:46.557 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:02:46.557 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:02:46.557 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:02:46.557 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:02:46.557 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:02:46.557 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:02:46.557 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:02:46.557 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:02:46.557 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:02:47.493 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
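The `ioatdma -> vfio-pci` and `nvme -> vfio-pci` lines are setup.sh rebinding the DMA engines and the NVMe controller to vfio-pci so userspace (SPDK/DPDK) can drive them. A generic illustration of such a rebind using the standard sysfs driver_override flow; this is not a quote of setup.sh itself:

#!/usr/bin/env bash
# Rebind one PCI function (by BDF) from its current kernel driver to vfio-pci.
bind_vfio() {
    local bdf=$1
    modprobe vfio-pci
    echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"
    if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
        echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
    fi
    echo "$bdf" > /sys/bus/pci/drivers_probe   # re-probe honours the override
}

bind_vfio 0000:88:00.0    # the NVMe controller exercised by this job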
00:02:47.493 05:57:53 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:02:47.493 05:57:53 -- setup/hugepages.sh@89 -- # local node
00:02:47.493 05:57:53 -- setup/hugepages.sh@90 -- # local sorted_t
00:02:47.493 05:57:53 -- setup/hugepages.sh@91 -- # local sorted_s
00:02:47.493 05:57:53 -- setup/hugepages.sh@92 -- # local surp
00:02:47.493 05:57:53 -- setup/hugepages.sh@93 -- # local resv
00:02:47.493 05:57:53 -- setup/hugepages.sh@94 -- # local anon
00:02:47.493 05:57:53 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:47.493 05:57:53 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:47.493 05:57:53 -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:47.493 05:57:53 -- setup/common.sh@18 -- # local node=
00:02:47.493 05:57:53 -- setup/common.sh@19 -- # local var val
00:02:47.493 05:57:53 -- setup/common.sh@20 -- # local mem_f mem
00:02:47.493 05:57:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:47.493 05:57:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:47.493 05:57:53 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:47.493 05:57:53 -- setup/common.sh@28 -- # mapfile -t mem
00:02:47.493 05:57:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:47.493 05:57:53 -- setup/common.sh@31 -- # IFS=': '
00:02:47.493 05:57:53 -- setup/common.sh@31 -- # read -r var val _
00:02:47.493 05:57:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45919876 kB' 'MemAvailable: 49420152 kB' 'Buffers: 2704 kB' 'Cached: 10169236 kB' 'SwapCached: 0 kB' 'Active: 7178804 kB' 'Inactive: 3506552 kB' 'Active(anon): 6784452 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516648 kB' 'Mapped: 173508 kB' 'Shmem: 6271036 kB' 'KReclaimable: 185988 kB' 'Slab: 550340 kB' 'SReclaimable: 185988 kB' 'SUnreclaim: 364352 kB' 'KernelStack: 12816 kB' 'PageTables: 8104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7908624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 15947776 kB' 'DirectMap1G: 51380224 kB'
00:02:47.493 05:57:53 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:47.493 05:57:53 -- setup/common.sh@32 -- # continue
[the same setup/common.sh@31/@32 read, check and continue repeat for the fields MemFree through VmallocUsed]
00:02:47.494 05:57:53 -- setup/common.sh@31 -- # read
-r var val _ 00:02:47.494 05:57:53 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.494 05:57:53 -- setup/common.sh@32 -- # continue 00:02:47.494 05:57:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.494 05:57:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.494 05:57:53 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.494 05:57:53 -- setup/common.sh@32 -- # continue 00:02:47.494 05:57:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.494 05:57:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.494 05:57:53 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.494 05:57:53 -- setup/common.sh@32 -- # continue 00:02:47.494 05:57:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.494 05:57:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.494 05:57:53 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.494 05:57:53 -- setup/common.sh@32 -- # continue 00:02:47.494 05:57:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.494 05:57:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.494 05:57:53 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.494 05:57:53 -- setup/common.sh@33 -- # echo 0 00:02:47.494 05:57:53 -- setup/common.sh@33 -- # return 0 00:02:47.494 05:57:53 -- setup/hugepages.sh@97 -- # anon=0 00:02:47.494 05:57:53 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:47.494 05:57:53 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:47.494 05:57:53 -- setup/common.sh@18 -- # local node= 00:02:47.494 05:57:53 -- setup/common.sh@19 -- # local var val 00:02:47.494 05:57:53 -- setup/common.sh@20 -- # local mem_f mem 00:02:47.494 05:57:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.494 05:57:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:47.494 05:57:53 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:47.494 05:57:53 -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.494 05:57:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.494 05:57:53 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.494 05:57:53 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.494 05:57:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45923084 kB' 'MemAvailable: 49423344 kB' 'Buffers: 2704 kB' 'Cached: 10169240 kB' 'SwapCached: 0 kB' 'Active: 7178344 kB' 'Inactive: 3506552 kB' 'Active(anon): 6783992 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516232 kB' 'Mapped: 173500 kB' 'Shmem: 6271040 kB' 'KReclaimable: 185956 kB' 'Slab: 550364 kB' 'SReclaimable: 185956 kB' 'SUnreclaim: 364408 kB' 'KernelStack: 12752 kB' 'PageTables: 7836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7908636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 15947776 kB' 
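The block above is setup/common.sh's get_meminfo walking the meminfo snapshot one field at a time: each "field: value" line is split with IFS=': ', the field name is compared against the requested key (AnonHugePages here), non-matches fall through to continue, and the first match is echoed back to hugepages.sh. A minimal self-contained sketch of the same lookup follows; it reads the file directly rather than through the script's mem array, and the function name is illustrative, not the SPDK helper itself.

    #!/usr/bin/env bash
    # Sketch only: look up one /proc/meminfo field the way the traced loop does,
    # splitting each line on ': ' and skipping every field that is not the target.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # non-matching field -> next line
            echo "$val"                        # value in kB (pages for HugePages_*)
            return 0
        done < /proc/meminfo
        return 1                               # field not present at all
    }

    get_meminfo_sketch AnonHugePages

On the host traced here this prints 0, matching the "echo 0" that ends the walk above.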
00:02:47.494 05:57:53 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:47.494 05:57:53 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:47.494 05:57:53 -- setup/common.sh@18 -- # local node=
00:02:47.494 05:57:53 -- setup/common.sh@19 -- # local var val
00:02:47.494 05:57:53 -- setup/common.sh@20 -- # local mem_f mem
00:02:47.494 05:57:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:47.494 05:57:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:47.494 05:57:53 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:47.494 05:57:53 -- setup/common.sh@28 -- # mapfile -t mem
00:02:47.494 05:57:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:47.494 05:57:53 -- setup/common.sh@31 -- # IFS=': '
00:02:47.494 05:57:53 -- setup/common.sh@31 -- # read -r var val _
00:02:47.494 05:57:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45923084 kB' 'MemAvailable: 49423344 kB' 'Buffers: 2704 kB' 'Cached: 10169240 kB' 'SwapCached: 0 kB' 'Active: 7178344 kB' 'Inactive: 3506552 kB' 'Active(anon): 6783992 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516232 kB' 'Mapped: 173500 kB' 'Shmem: 6271040 kB' 'KReclaimable: 185956 kB' 'Slab: 550364 kB' 'SReclaimable: 185956 kB' 'SUnreclaim: 364408 kB' 'KernelStack: 12752 kB' 'PageTables: 7836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7908636 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 15947776 kB' 'DirectMap1G: 51380224 kB'
[trace condensed: setup/common.sh@31-@32 tests every field from MemTotal through HugePages_Rsvd against HugePages_Surp and continues past each non-match]
00:02:47.495 05:57:53 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:47.495 05:57:53 -- setup/common.sh@33 -- # echo 0
00:02:47.495 05:57:53 -- setup/common.sh@33 -- # return 0
00:02:47.495 05:57:53 -- setup/hugepages.sh@99 -- # surp=0
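Before each walk the trace shows get_meminfo loading the whole meminfo file with mapfile and stripping any leading "Node <n> " prefix via the extglob pattern in mem=("${mem[@]#Node +([0-9]) }"), so the per-node files under /sys/devices/system/node parse exactly like /proc/meminfo. Below is a small sketch of that normalisation step; the function name is illustrative, and it assumes extglob is enabled, which the pattern requires.

    #!/usr/bin/env bash
    # Sketch only: load a meminfo-style file and drop the "Node <n> " prefix that
    # per-node files under /sys/devices/system/node/node*/meminfo carry, so both
    # layouts can be parsed with the same "field: value" loop.
    shopt -s extglob                       # the +([0-9]) pattern below needs extglob

    load_meminfo_sketch() {
        local mem_f=$1
        local -a mem
        mapfile -t mem < "$mem_f"                  # one array element per line
        mem=("${mem[@]#Node +([0-9]) }")           # "Node 0 MemTotal: ..." -> "MemTotal: ..."
        printf '%s\n' "${mem[@]}"
    }

    load_meminfo_sketch /proc/meminfo | head -n 3
    load_meminfo_sketch /sys/devices/system/node/node0/meminfo | head -n 3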
00:02:47.495 05:57:53 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:47.495 05:57:53 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:47.495 05:57:53 -- setup/common.sh@18 -- # local node=
00:02:47.495 05:57:53 -- setup/common.sh@19 -- # local var val
00:02:47.495 05:57:53 -- setup/common.sh@20 -- # local mem_f mem
00:02:47.495 05:57:53 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:47.495 05:57:53 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:47.495 05:57:53 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:47.495 05:57:53 -- setup/common.sh@28 -- # mapfile -t mem
00:02:47.495 05:57:53 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:47.495 05:57:53 -- setup/common.sh@31 -- # IFS=': '
00:02:47.495 05:57:53 -- setup/common.sh@31 -- # read -r var val _
00:02:47.495 05:57:53 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45923380 kB' 'MemAvailable: 49423640 kB' 'Buffers: 2704 kB' 'Cached: 10169252 kB' 'SwapCached: 0 kB' 'Active: 7178084 kB' 'Inactive: 3506552 kB' 'Active(anon): 6783732 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515988 kB' 'Mapped: 173496 kB' 'Shmem: 6271052 kB' 'KReclaimable: 185956 kB' 'Slab: 550428 kB' 'SReclaimable: 185956 kB' 'SUnreclaim: 364472 kB' 'KernelStack: 12784 kB' 'PageTables: 7952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7908652 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 15947776 kB' 'DirectMap1G: 51380224 kB'
[trace condensed: setup/common.sh@31-@32 tests every field from MemTotal through HugePages_Free against HugePages_Rsvd and continues past each non-match]
00:02:47.497 05:57:53 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:47.497 05:57:53 -- setup/common.sh@33 -- # echo 0
00:02:47.497 05:57:53 -- setup/common.sh@33 -- # return 0
00:02:47.497 05:57:53 -- setup/hugepages.sh@100 -- # resv=0
00:02:47.497 05:57:53 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:47.497 nr_hugepages=1024
00:02:47.497 05:57:53 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:47.497 resv_hugepages=0
00:02:47.497 05:57:53 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:47.497 surplus_hugepages=0
00:02:47.497 05:57:53 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:47.497 anon_hugepages=0
00:02:47.497 05:57:53 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:47.497 05:57:53 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
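With anon, surp and resv all collected, hugepages.sh prints the summary (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and asserts that the kernel-reported hugepage total is consistent with the expected pool plus surplus and reserved pages. The sketch below performs the same consistency check; the helper and variable names are illustrative rather than the exact hugepages.sh internals.

    #!/usr/bin/env bash
    # Sketch only: the hugepage accounting check seen in the trace, namely that
    # HugePages_Total equals the requested pool plus surplus and reserved pages.
    nr_hugepages=1024                 # expected pool size, taken from the log above

    meminfo_field() { awk -v f="$1:" '$1 == f {print $2}' /proc/meminfo; }

    surp=$(meminfo_field HugePages_Surp)
    resv=$(meminfo_field HugePages_Rsvd)
    total=$(meminfo_field HugePages_Total)

    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"

    # Mirrors the (( 1024 == nr_hugepages + surp + resv )) test in the trace.
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting is consistent"
    else
        echo "unexpected HugePages_Total=$total" >&2
        exit 1
    fi

On this run all three derived counters are 0, so the check reduces to HugePages_Total == 1024, which the meminfo snapshots above confirm.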
00:02:47.497 05:57:53 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:47.497 05:57:53 -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:47.497 05:57:53 -- setup/common.sh@18 -- # local node=
00:02:47.497 05:57:54 -- setup/common.sh@19 -- # local var val
00:02:47.497 05:57:54 -- setup/common.sh@20 -- # local mem_f mem
00:02:47.497 05:57:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:47.497 05:57:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:47.497 05:57:54 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:47.757 05:57:54 -- setup/common.sh@28 -- # mapfile -t mem
00:02:47.757 05:57:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:47.757 05:57:54 -- setup/common.sh@31 -- # IFS=': '
00:02:47.757 05:57:54 -- setup/common.sh@31 -- # read -r var val _
00:02:47.757 05:57:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45922624 kB' 'MemAvailable: 49422884 kB' 'Buffers: 2704 kB' 'Cached: 10169264 kB' 'SwapCached: 0 kB' 'Active: 7178092 kB' 'Inactive: 3506552 kB' 'Active(anon): 6783740 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515988 kB' 'Mapped: 173496 kB' 'Shmem: 6271064 kB' 'KReclaimable: 185956 kB' 'Slab: 550428 kB' 'SReclaimable: 185956 kB' 'SUnreclaim: 364472 kB' 'KernelStack: 12784 kB' 'PageTables: 7952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7908668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 15947776 kB' 'DirectMap1G: 51380224 kB'
[trace condensed: setup/common.sh@31-@32 tests every field from MemTotal through Unaccepted against HugePages_Total and continues past each non-match]
00:02:47.758 05:57:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:47.758 05:57:54 -- setup/common.sh@33 -- # echo 1024
00:02:47.758 05:57:54 -- setup/common.sh@33 -- # return 0
00:02:47.758 05:57:54 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
[trace condensed: get_nodes then walks /sys/devices/system/node/node+([0-9]) and records nodes_sys[0]=1024 and nodes_sys[1]=0, giving no_nodes=2]
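get_nodes enumerates the NUMA nodes under /sys/devices/system/node (two here, with all 1024 hugepages resident on node 0 and none on node 1), and the per-node counters are then re-read from each node<N>/meminfo file. The sketch below reproduces that enumeration under the same sysfs layout; the array and variable names are illustrative, not the hugepages.sh ones.

    #!/usr/bin/env bash
    # Sketch only: enumerate NUMA nodes the way the get_nodes trace does and read
    # the hugepage counters from each node's own meminfo file.
    shopt -s extglob nullglob

    declare -A node_total
    for node in /sys/devices/system/node/node+([0-9]); do
        id=${node##*node}                          # ".../node0" -> "0"
        node_total[$id]=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
    done

    echo "found ${#node_total[@]} NUMA node(s)"
    for id in "${!node_total[@]}"; do
        surp=$(awk '/HugePages_Surp:/ {print $NF}' \
            "/sys/devices/system/node/node$id/meminfo")
        echo "node$id: HugePages_Total=${node_total[$id]} HugePages_Surp=$surp"
    done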
1889428 kB' 'Inactive: 108696 kB' 'Active(anon): 1778540 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1751072 kB' 'Mapped: 48476 kB' 'AnonPages: 250188 kB' 'Shmem: 1531488 kB' 'KernelStack: 7864 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84212 kB' 'Slab: 301684 kB' 'SReclaimable: 84212 kB' 'SUnreclaim: 217472 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:47.758 05:57:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.758 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.758 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.758 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.758 05:57:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.758 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.758 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.758 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.758 05:57:54 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.758 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.758 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.758 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.758 05:57:54 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.758 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.758 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.759 05:57:54 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 
00:02:47.759 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # continue 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # IFS=': ' 00:02:47.759 05:57:54 -- setup/common.sh@31 -- # read -r var val _ 00:02:47.759 05:57:54 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.759 05:57:54 -- setup/common.sh@33 -- # echo 0 00:02:47.759 05:57:54 -- setup/common.sh@33 -- # return 0 00:02:47.759 05:57:54 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:47.759 05:57:54 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:47.759 05:57:54 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:47.759 05:57:54 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:47.759 05:57:54 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:47.759 node0=1024 expecting 1024 00:02:47.759 05:57:54 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:47.759 00:02:47.759 real 0m2.443s 00:02:47.759 user 0m0.646s 00:02:47.759 sys 0m0.866s 00:02:47.759 05:57:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:47.759 05:57:54 -- common/autotest_common.sh@10 -- # set +x 00:02:47.759 ************************************ 00:02:47.759 END TEST default_setup 00:02:47.759 ************************************ 00:02:47.759 05:57:54 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:47.759 05:57:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:47.759 05:57:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:47.759 05:57:54 -- common/autotest_common.sh@10 -- # set +x 00:02:47.759 ************************************ 00:02:47.759 START TEST per_node_1G_alloc 00:02:47.759 ************************************ 00:02:47.759 05:57:54 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:02:47.759 05:57:54 -- setup/hugepages.sh@143 -- # local IFS=, 00:02:47.759 05:57:54 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:47.759 05:57:54 -- setup/hugepages.sh@49 -- # local size=1048576 00:02:47.759 05:57:54 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:47.759 05:57:54 -- setup/hugepages.sh@51 -- # shift 00:02:47.759 05:57:54 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:47.759 05:57:54 -- setup/hugepages.sh@52 -- # local node_ids 00:02:47.759 05:57:54 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:47.759 05:57:54 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:47.759 05:57:54 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:47.759 05:57:54 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:47.759 05:57:54 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:47.759 05:57:54 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:47.759 05:57:54 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:47.759 05:57:54 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:47.759 05:57:54 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:47.759 05:57:54 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:47.759 05:57:54 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:47.759 05:57:54 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:47.759 05:57:54 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:47.759 05:57:54 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:47.760 05:57:54 -- setup/hugepages.sh@73 -- # return 0 00:02:47.760 05:57:54 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:02:47.760 
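The scans traced above are setup/common.sh reading a meminfo file key by key until it reaches the requested field, echoing the value and returning. A minimal standalone sketch of that lookup; the helper name get_meminfo_sketch and the sed/awk approach are illustrative, not the SPDK implementation:

    #!/usr/bin/env bash
    # Sketch only: fetch one field from /proc/meminfo, or from a NUMA node's
    # meminfo when a node number is given (mirrors the node0 lookup above).
    get_meminfo_sketch() {
        local key=$1 node=${2:-}
        local file=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            file=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node meminfo lines carry a "Node N" prefix; strip it before matching.
        sed 's/^Node [0-9]* *//' "$file" | awk -v k="$key:" '$1 == k { print $2; exit }'
    }

    get_meminfo_sketch HugePages_Total      # 1024 on this runner
    get_meminfo_sketch HugePages_Surp 0     # surplus hugepages on node 0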
00:02:47.759 05:57:54 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:02:47.759 05:57:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:02:47.759 05:57:54 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:02:47.759 05:57:54 -- common/autotest_common.sh@10 -- # set +x
00:02:47.759 ************************************
00:02:47.759 START TEST per_node_1G_alloc
00:02:47.759 ************************************
00:02:47.759 05:57:54 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc
00:02:47.759 05:57:54 -- setup/hugepages.sh@143 -- # local IFS=,
00:02:47.759 05:57:54 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:02:47.759 05:57:54 -- setup/hugepages.sh@49 -- # local size=1048576
00:02:47.759 05:57:54 -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:02:47.759 05:57:54 -- setup/hugepages.sh@51 -- # shift
00:02:47.759 05:57:54 -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:02:47.759 05:57:54 -- setup/hugepages.sh@52 -- # local node_ids
00:02:47.759 05:57:54 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:47.759 05:57:54 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:02:47.759 05:57:54 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:02:47.759 05:57:54 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:02:47.759 05:57:54 -- setup/hugepages.sh@62 -- # local user_nodes
00:02:47.759 05:57:54 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:02:47.759 05:57:54 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:47.759 05:57:54 -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:47.759 05:57:54 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:47.759 05:57:54 -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:02:47.759 05:57:54 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:47.759 05:57:54 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:47.759 05:57:54 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:47.759 05:57:54 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:47.760 05:57:54 -- setup/hugepages.sh@73 -- # return 0
00:02:47.760 05:57:54 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:02:47.760 05:57:54 -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:02:47.760 05:57:54 -- setup/hugepages.sh@146 -- # setup output
00:02:47.760 05:57:54 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:47.760 05:57:54 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:48.693 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:48.693 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:48.954 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:48.954 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:48.954 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:48.954 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:48.955 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:48.955 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:48.955 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:48.955 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:48.955 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:48.955 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:48.955 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:48.955 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:48.955 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:48.955 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:48.955 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
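Here NRHUGE=512 and HUGENODE=0,1 ask scripts/setup.sh for 512 hugepages on each of the two NUMA nodes before the PCI bindings are re-checked (all devices are already on vfio-pci). A hedged sketch of the sysfs mechanism such a per-node request ultimately relies on, using this run's node numbers and the 2048 kB page size reported below; it is not the setup.sh implementation itself:

    #!/usr/bin/env bash
    # Sketch: reserve 512 x 2048 kB hugepages on NUMA nodes 0 and 1 via sysfs.
    NRHUGE=512
    for node in 0 1; do
        echo "$NRHUGE" | sudo tee \
            "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages" >/dev/null
    done
    # Confirm what the kernel actually granted on each node.
    grep . /sys/devices/system/node/node[01]/hugepages/hugepages-2048kB/nr_hugepages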
00:02:48.955 05:57:55 -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:02:48.955 05:57:55 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:02:48.955 05:57:55 -- setup/hugepages.sh@89 -- # local node
00:02:48.955 05:57:55 -- setup/hugepages.sh@90 -- # local sorted_t
00:02:48.955 05:57:55 -- setup/hugepages.sh@91 -- # local sorted_s
00:02:48.955 05:57:55 -- setup/hugepages.sh@92 -- # local surp
00:02:48.955 05:57:55 -- setup/hugepages.sh@93 -- # local resv
00:02:48.955 05:57:55 -- setup/hugepages.sh@94 -- # local anon
00:02:48.955 05:57:55 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:48.955 05:57:55 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:48.955 05:57:55 -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:48.955 05:57:55 -- setup/common.sh@18 -- # local node=
00:02:48.955 05:57:55 -- setup/common.sh@19 -- # local var val
00:02:48.955 05:57:55 -- setup/common.sh@20 -- # local mem_f mem
00:02:48.955 05:57:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:48.955 05:57:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:48.955 05:57:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:48.955 05:57:55 -- setup/common.sh@28 -- # mapfile -t mem
00:02:48.955 05:57:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:48.955 05:57:55 -- setup/common.sh@31 -- # IFS=': '
00:02:48.955 05:57:55 -- setup/common.sh@31 -- # read -r var val _
00:02:48.955 05:57:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45917992 kB' 'MemAvailable: 49418252 kB' 'Buffers: 2704 kB' 'Cached: 10169312 kB' 'SwapCached: 0 kB' 'Active: 7178664 kB' 'Inactive: 3506552 kB' 'Active(anon): 6784312 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516400 kB' 'Mapped: 173544 kB' 'Shmem: 6271112 kB' 'KReclaimable: 185956 kB' 'Slab: 550764 kB' 'SReclaimable: 185956 kB' 'SUnreclaim: 364808 kB' 'KernelStack: 12848 kB' 'PageTables: 8080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7908828 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 15947776 kB' 'DirectMap1G: 51380224 kB'
[setup/common.sh@31-32 xtrace: the /proc/meminfo fields listed above are each compared against AnonHugePages and skipped]
00:02:48.956 05:57:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:48.956 05:57:55 -- setup/common.sh@33 -- # echo 0
00:02:48.956 05:57:55 -- setup/common.sh@33 -- # return 0
00:02:48.956 05:57:55 -- setup/hugepages.sh@97 -- # anon=0
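anon comes out as 0 here: the guard at setup/hugepages.sh@96 only bothers with AnonHugePages when transparent hugepages are not pinned to never (this runner reports 'always [madvise] never'). A small sketch of that guard, assuming the usual THP sysfs path and leaving the value in raw kB as the trace does:

    #!/usr/bin/env bash
    # Sketch: read AnonHugePages only when THP is not disabled system-wide.
    anon_kb=0
    if [[ $(cat /sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
        anon_kb=$(awk '/^AnonHugePages:/ { print $2 }' /proc/meminfo)
    fi
    echo "anon_hugepages=${anon_kb}"   # 0 kB on this runner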
00:02:48.956 05:57:55 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:48.956 05:57:55 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:48.956 05:57:55 -- setup/common.sh@18 -- # local node=
00:02:48.956 05:57:55 -- setup/common.sh@19 -- # local var val
00:02:48.956 05:57:55 -- setup/common.sh@20 -- # local mem_f mem
00:02:48.956 05:57:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:48.956 05:57:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:48.956 05:57:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:48.956 05:57:55 -- setup/common.sh@28 -- # mapfile -t mem
00:02:48.956 05:57:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:48.956 05:57:55 -- setup/common.sh@31 -- # IFS=': '
00:02:48.956 05:57:55 -- setup/common.sh@31 -- # read -r var val _
00:02:48.956 05:57:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45920264 kB' 'MemAvailable: 49420524 kB' 'Buffers: 2704 kB' 'Cached: 10169312 kB' 'SwapCached: 0 kB' 'Active: 7179272 kB' 'Inactive: 3506552 kB' 'Active(anon): 6784920 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517032 kB' 'Mapped: 173620 kB' 'Shmem: 6271112 kB' 'KReclaimable: 185956 kB' 'Slab: 550832 kB' 'SReclaimable: 185956 kB' 'SUnreclaim: 364876 kB' 'KernelStack: 12864 kB' 'PageTables: 8140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7908840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 15947776 kB' 'DirectMap1G: 51380224 kB'
[setup/common.sh@31-32 xtrace: the /proc/meminfo fields listed above are each compared against HugePages_Surp and skipped]
00:02:48.957 05:57:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:48.957 05:57:55 -- setup/common.sh@33 -- # echo 0
00:02:48.957 05:57:55 -- setup/common.sh@33 -- # return 0
00:02:48.957 05:57:55 -- setup/hugepages.sh@99 -- # surp=0
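With anon and surp known (resv is read next), verify_nr_hugepages reduces to plain accounting: HugePages_Total must equal the requested count plus surplus plus reserved pages, which is the (( 1024 == nr_hugepages + surp + resv )) check in the trace. A sketch of that check with this run's numbers (1024 requested, 0 surplus, 0 reserved):

    #!/usr/bin/env bash
    # Sketch: the accounting check behind verify_nr_hugepages, using this run's values.
    nr_hugepages=1024   # 512 per node on 2 nodes, as requested above
    total=$(awk '/^HugePages_Total:/ { print $2 }' /proc/meminfo)
    surp=$(awk  '/^HugePages_Surp:/  { print $2 }' /proc/meminfo)
    resv=$(awk  '/^HugePages_Rsvd:/  { print $2 }' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting OK: ${total} == ${nr_hugepages} + ${surp} + ${resv}"
    else
        echo "hugepage accounting mismatch" >&2
        exit 1
    fi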
00:02:48.957 05:57:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:48.957 05:57:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:48.957 05:57:55 -- setup/common.sh@18 -- # local node=
00:02:48.957 05:57:55 -- setup/common.sh@19 -- # local var val
00:02:48.957 05:57:55 -- setup/common.sh@20 -- # local mem_f mem
00:02:48.957 05:57:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:48.957 05:57:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:48.957 05:57:55 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:48.957 05:57:55 -- setup/common.sh@28 -- # mapfile -t mem
00:02:48.957 05:57:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:48.957 05:57:55 -- setup/common.sh@31 -- # IFS=': '
00:02:48.957 05:57:55 -- setup/common.sh@31 -- # read -r var val _
00:02:48.957 05:57:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45920888 kB' 'MemAvailable: 49421148 kB' 'Buffers: 2704 kB' 'Cached: 10169324 kB' 'SwapCached: 0 kB' 'Active: 7178288 kB' 'Inactive: 3506552 kB' 'Active(anon): 6783936 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515992 kB' 'Mapped: 173504 kB' 'Shmem: 6271124 kB' 'KReclaimable: 185956 kB' 'Slab: 550828 kB' 'SReclaimable: 185956 kB' 'SUnreclaim: 364872 kB' 'KernelStack: 12880 kB' 'PageTables: 8076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7908852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 15947776 kB' 'DirectMap1G: 51380224 kB'
[setup/common.sh@31-32 xtrace: the /proc/meminfo fields listed above are each compared against HugePages_Rsvd and skipped]
00:02:49.219 05:57:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:49.219 05:57:55 -- setup/common.sh@33 -- # echo 0
00:02:49.219 05:57:55 -- setup/common.sh@33 -- # return 0
00:02:49.219 05:57:55 -- setup/hugepages.sh@100 -- # resv=0
00:02:49.219 05:57:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:49.219 nr_hugepages=1024
00:02:49.219 05:57:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:49.219 resv_hugepages=0
00:02:49.219 05:57:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:49.219 surplus_hugepages=0
00:02:49.219 05:57:55 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:49.219 anon_hugepages=0
00:02:49.219 05:57:55 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:49.219 05:57:55 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:49.219 05:57:55 -- setup/hugepages.sh@110 -- # get_meminfo 
HugePages_Total 00:02:49.219 05:57:55 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:49.219 05:57:55 -- setup/common.sh@18 -- # local node= 00:02:49.219 05:57:55 -- setup/common.sh@19 -- # local var val 00:02:49.219 05:57:55 -- setup/common.sh@20 -- # local mem_f mem 00:02:49.219 05:57:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.219 05:57:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:49.219 05:57:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:49.219 05:57:55 -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.219 05:57:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.219 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.219 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.219 05:57:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45921312 kB' 'MemAvailable: 49421572 kB' 'Buffers: 2704 kB' 'Cached: 10169340 kB' 'SwapCached: 0 kB' 'Active: 7178484 kB' 'Inactive: 3506552 kB' 'Active(anon): 6784132 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516220 kB' 'Mapped: 173504 kB' 'Shmem: 6271140 kB' 'KReclaimable: 185956 kB' 'Slab: 550828 kB' 'SReclaimable: 185956 kB' 'SUnreclaim: 364872 kB' 'KernelStack: 12896 kB' 'PageTables: 8132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7908868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 15947776 kB' 'DirectMap1G: 51380224 kB' 00:02:49.219 05:57:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.219 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.219 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.219 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.219 05:57:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.219 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.219 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.219 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.219 05:57:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.219 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.219 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.219 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.219 05:57:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.219 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.219 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.219 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.220 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.220 05:57:55 -- setup/common.sh@31 -- 
# IFS=': ' 00:02:49.220 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:49.221 05:57:55 -- setup/common.sh@33 -- # echo 1024 00:02:49.221 05:57:55 -- setup/common.sh@33 -- # return 0 00:02:49.221 05:57:55 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:49.221 05:57:55 -- setup/hugepages.sh@112 -- # get_nodes 00:02:49.221 05:57:55 -- setup/hugepages.sh@27 -- # local node 00:02:49.221 05:57:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:49.221 05:57:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:49.221 05:57:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:49.221 05:57:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:49.221 05:57:55 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:49.221 05:57:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:49.221 05:57:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:49.221 05:57:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:49.221 05:57:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:49.221 05:57:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:49.221 05:57:55 -- setup/common.sh@18 -- # local node=0 00:02:49.221 05:57:55 -- setup/common.sh@19 -- # local var val 00:02:49.221 05:57:55 -- setup/common.sh@20 -- # local mem_f mem 00:02:49.221 05:57:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.221 05:57:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:49.221 05:57:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:49.221 05:57:55 -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.221 05:57:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.221 05:57:55 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 32829884 kB' 'MemFree: 28825824 kB' 'MemUsed: 4004060 kB' 'SwapCached: 0 kB' 'Active: 1889920 kB' 'Inactive: 108696 kB' 'Active(anon): 1779032 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1751076 kB' 'Mapped: 48476 kB' 'AnonPages: 250652 kB' 'Shmem: 1531492 kB' 'KernelStack: 7976 kB' 'PageTables: 4624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84212 kB' 'Slab: 301924 kB' 'SReclaimable: 84212 kB' 'SUnreclaim: 217712 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 
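The trace above is setup/common.sh's get_meminfo walking node 0's meminfo dump: it reads the file into an array, strips the "Node N " prefix, then scans each "key: value" pair with IFS=': ' read, skipping non-matching keys with continue and echoing the value once the requested key (here HugePages_Surp) is found. Below is a minimal standalone Bash sketch of that lookup pattern; the function and variable names are illustrative, not the exact helpers from setup/common.sh.

#!/usr/bin/env bash
shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip below

get_meminfo_field() {
    local key=$1 node=${2-}          # e.g. HugePages_Surp 0
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    # per-node meminfo lines look like "Node 0 HugePages_Surp: 0"
    mem=("${mem[@]#Node +([0-9]) }")

    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$key" ]] || continue   # same skip-until-match loop as the trace
        echo "${val:-0}"
        return 0
    done
    echo 0
}

# Example: node 0's surplus hugepages, as queried in the log
get_meminfo_field HugePages_Surp 0

The call above would print node 0's surplus-page count, matching the "echo 0 / return 0" the trace reports once HugePages_Surp is reached.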
00:02:49.221 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.221 05:57:55 -- 
setup/common.sh@32 -- # continue 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.221 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.221 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 
00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@33 -- # echo 0 00:02:49.222 05:57:55 -- setup/common.sh@33 -- # return 0 00:02:49.222 05:57:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:49.222 05:57:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:49.222 05:57:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:49.222 05:57:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:49.222 05:57:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:49.222 05:57:55 -- setup/common.sh@18 -- # local node=1 00:02:49.222 05:57:55 -- setup/common.sh@19 -- # local var val 00:02:49.222 05:57:55 -- setup/common.sh@20 -- # local mem_f mem 00:02:49.222 05:57:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.222 05:57:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:49.222 05:57:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:49.222 05:57:55 -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.222 05:57:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 17095696 kB' 'MemUsed: 10616128 kB' 'SwapCached: 0 kB' 'Active: 5288348 kB' 'Inactive: 3397856 kB' 'Active(anon): 5004884 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3397856 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8420988 kB' 'Mapped: 125088 kB' 'AnonPages: 265288 kB' 'Shmem: 4739668 kB' 'KernelStack: 4920 kB' 'PageTables: 3508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 101744 kB' 'Slab: 248904 kB' 'SReclaimable: 101744 kB' 'SUnreclaim: 147160 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 
00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- 
setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.222 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.222 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.223 05:57:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.223 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.223 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.223 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.223 05:57:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.223 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.223 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.223 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.223 05:57:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.223 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.223 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.223 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.223 05:57:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.223 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.223 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.223 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.223 05:57:55 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.223 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.223 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.223 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.223 05:57:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.223 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.223 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.223 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.223 05:57:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.223 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.223 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.223 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.223 05:57:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.223 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.223 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.223 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.223 05:57:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.223 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.223 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.223 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.223 05:57:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.223 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.223 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.223 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.223 05:57:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.223 05:57:55 -- setup/common.sh@32 -- # continue 00:02:49.223 05:57:55 -- setup/common.sh@31 -- # IFS=': ' 00:02:49.223 05:57:55 -- setup/common.sh@31 -- # read -r var val _ 00:02:49.223 05:57:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.223 05:57:55 -- setup/common.sh@33 -- # echo 0 00:02:49.223 05:57:55 -- setup/common.sh@33 -- # return 0 00:02:49.223 05:57:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:49.223 05:57:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:49.223 05:57:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:49.223 05:57:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:49.223 05:57:55 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:49.223 node0=512 expecting 512 00:02:49.223 05:57:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:49.223 05:57:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:49.223 05:57:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:49.223 05:57:55 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:49.223 node1=512 expecting 512 00:02:49.223 05:57:55 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:49.223 00:02:49.223 real 0m1.466s 00:02:49.223 user 0m0.628s 00:02:49.223 sys 0m0.804s 00:02:49.223 05:57:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:49.223 05:57:55 -- common/autotest_common.sh@10 -- # set +x 00:02:49.223 ************************************ 00:02:49.223 END TEST per_node_1G_alloc 00:02:49.223 ************************************ 00:02:49.223 05:57:55 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:49.223 
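per_node_1G_alloc finished by confirming "node0=512 expecting 512" and "node1=512 expecting 512" against the global pool of 1024 pages. The following is a small sketch of that cross-check, summing each node's HugePages_Total from sysfs and comparing it with /proc/meminfo; only the file locations are the standard kernel interfaces, the awk parsing and summing are illustrative rather than the test's own code.

#!/usr/bin/env bash
# Sum per-node hugepage totals and compare with the global pool.

global=$(awk '/^HugePages_Total:/ {print $NF}' /proc/meminfo)

total=0
for node in /sys/devices/system/node/node[0-9]*; do
    # per-node lines look like "Node 0 HugePages_Total:   512"
    pages=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
    echo "${node##*/}=$pages"
    total=$((total + pages))
done

if ((total == global)); then
    echo "per-node hugepages ($total) match the global pool ($global)"
else
    echo "mismatch: nodes report $total, /proc/meminfo reports $global" >&2
fi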
05:57:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:49.223 05:57:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:49.223 05:57:55 -- common/autotest_common.sh@10 -- # set +x 00:02:49.223 ************************************ 00:02:49.223 START TEST even_2G_alloc 00:02:49.223 ************************************ 00:02:49.223 05:57:55 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:02:49.223 05:57:55 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:49.223 05:57:55 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:49.223 05:57:55 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:49.223 05:57:55 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:49.223 05:57:55 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:49.223 05:57:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:49.223 05:57:55 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:49.223 05:57:55 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:49.223 05:57:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:49.223 05:57:55 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:49.223 05:57:55 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:49.223 05:57:55 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:49.223 05:57:55 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:49.223 05:57:55 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:49.223 05:57:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:49.223 05:57:55 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:49.223 05:57:55 -- setup/hugepages.sh@83 -- # : 512 00:02:49.223 05:57:55 -- setup/hugepages.sh@84 -- # : 1 00:02:49.223 05:57:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:49.223 05:57:55 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:49.223 05:57:55 -- setup/hugepages.sh@83 -- # : 0 00:02:49.223 05:57:55 -- setup/hugepages.sh@84 -- # : 0 00:02:49.223 05:57:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:49.223 05:57:55 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:49.223 05:57:55 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:49.223 05:57:55 -- setup/hugepages.sh@153 -- # setup output 00:02:49.223 05:57:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:49.223 05:57:55 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:50.157 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:50.157 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:50.157 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:50.157 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:50.157 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:50.157 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:50.157 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:50.157 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:50.157 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:50.157 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:50.157 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:50.157 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:50.157 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:50.157 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:50.157 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:50.157 0000:80:04.1 (8086 0e21): 
Already using the vfio-pci driver 00:02:50.157 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:50.420 05:57:56 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:50.420 05:57:56 -- setup/hugepages.sh@89 -- # local node 00:02:50.420 05:57:56 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:50.420 05:57:56 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:50.420 05:57:56 -- setup/hugepages.sh@92 -- # local surp 00:02:50.420 05:57:56 -- setup/hugepages.sh@93 -- # local resv 00:02:50.420 05:57:56 -- setup/hugepages.sh@94 -- # local anon 00:02:50.420 05:57:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:50.420 05:57:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:50.420 05:57:56 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:50.420 05:57:56 -- setup/common.sh@18 -- # local node= 00:02:50.420 05:57:56 -- setup/common.sh@19 -- # local var val 00:02:50.420 05:57:56 -- setup/common.sh@20 -- # local mem_f mem 00:02:50.420 05:57:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.420 05:57:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:50.420 05:57:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:50.420 05:57:56 -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.420 05:57:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.420 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.420 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.420 05:57:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45917368 kB' 'MemAvailable: 49417628 kB' 'Buffers: 2704 kB' 'Cached: 10169416 kB' 'SwapCached: 0 kB' 'Active: 7182972 kB' 'Inactive: 3506552 kB' 'Active(anon): 6788620 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520660 kB' 'Mapped: 173992 kB' 'Shmem: 6271216 kB' 'KReclaimable: 185956 kB' 'Slab: 550308 kB' 'SReclaimable: 185956 kB' 'SUnreclaim: 364352 kB' 'KernelStack: 12800 kB' 'PageTables: 7868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7915724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 15947776 kB' 'DirectMap1G: 51380224 kB' 00:02:50.420 05:57:56 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.420 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.420 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.420 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.420 05:57:56 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.420 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.420 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.420 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.420 05:57:56 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:50.420 05:57:56 -- setup/common.sh@32 -- # continue 
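even_2G_alloc exports NRHUGE=1024 and HUGE_EVEN_ALLOC=yes before handing control to scripts/setup.sh, and the verification that follows expects 512 pages on each of the two nodes. The sketch below is a simplified stand-in for that even split, not the actual setup.sh logic: it writes an equal share of 2 MB pages to each node's nr_hugepages knob (a standard kernel sysfs path) and then shows the resulting global counters.

#!/usr/bin/env bash
# Distribute NRHUGE 2 MB hugepages evenly across the NUMA nodes.

NRHUGE=${NRHUGE:-1024}
nodes=(/sys/devices/system/node/node[0-9]*)
per_node=$((NRHUGE / ${#nodes[@]}))

for node in "${nodes[@]}"; do
    knob=$node/hugepages/hugepages-2048kB/nr_hugepages
    echo "requesting $per_node pages on ${node##*/}"
    echo "$per_node" | sudo tee "$knob" > /dev/null
done

grep -E 'HugePages_(Total|Free)' /proc/meminfo

With two nodes this yields the 512/512 layout the verify_nr_hugepages pass that follows is checking for.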
00:02:50.420 05:57:56 -- setup/common.sh@31 -- # IFS=': '
00:02:50.420 05:57:56 -- setup/common.sh@31 -- # read -r var val _
00:02:50.421 05:57:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:50.421 05:57:56 -- setup/common.sh@33 -- # echo 0
00:02:50.421 05:57:56 -- setup/common.sh@33 -- # return 0
00:02:50.421 05:57:56 -- setup/hugepages.sh@97 -- # anon=0
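The trace above is the setup script's get_meminfo helper walking the mem array it read from /proc/meminfo, key by key, until it reaches the requested field (here AnonHugePages, which the kernel reports as 0 kB). A minimal standalone sketch of the same scan, assuming only bash and /proc/meminfo; the function name get_meminfo_value is illustrative, not the script's own:

    #!/usr/bin/env bash
    # Sketch: print the value column for one /proc/meminfo key, the way the
    # traced loop does it -- split each line on ': ' and stop on the first match.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    get_meminfo_value AnonHugePages    # e.g. prints 0 (value in kB)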
00:02:50.421 05:57:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:50.421 05:57:56 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:50.421 05:57:56 -- setup/common.sh@18 -- # local node=
00:02:50.421 05:57:56 -- setup/common.sh@19 -- # local var val
00:02:50.421 05:57:56 -- setup/common.sh@20 -- # local mem_f mem
00:02:50.421 05:57:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:50.421 05:57:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:50.421 05:57:56 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:50.421 05:57:56 -- setup/common.sh@28 -- # mapfile -t mem
00:02:50.421 05:57:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:50.421 05:57:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45921544 kB' 'MemAvailable: 49421804 kB' 'Buffers: 2704 kB' 'Cached: 10169420 kB' 'SwapCached: 0 kB' 'Active: 7184212 kB' 'Inactive: 3506552 kB' 'Active(anon): 6789860 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521952 kB' 'Mapped: 174428 kB' 'Shmem: 6271220 kB' 'KReclaimable: 185956 kB' 'Slab: 550276 kB' 'SReclaimable: 185956 kB' 'SUnreclaim: 364320 kB' 'KernelStack: 12816 kB' 'PageTables: 7920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7915196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196020 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 15947776 kB' 'DirectMap1G: 51380224 kB'
00:02:50.422 05:57:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:50.422 05:57:56 -- setup/common.sh@33 -- # echo 0
00:02:50.422 05:57:56 -- setup/common.sh@33 -- # return 0
00:02:50.422 05:57:56 -- setup/hugepages.sh@99 -- # surp=0
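Each get_meminfo call above re-reads the whole meminfo file with mapfile and then strips any leading "Node <id> " prefix, so per-node files can be scanned with the same keys as /proc/meminfo. A small sketch of just that expansion, assuming extglob is enabled (the +([0-9]) pattern requires it) and that a NUMA node0 directory exists on the machine:

    #!/usr/bin/env bash
    # Sketch: normalize per-node meminfo lines ("Node 0 MemTotal: ...") to the
    # plain "MemTotal: ..." form used by /proc/meminfo.
    shopt -s extglob
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}" | head -n 3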
00:02:50.422 05:57:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:50.422 05:57:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:50.422 05:57:56 -- setup/common.sh@18 -- # local node=
00:02:50.422 05:57:56 -- setup/common.sh@19 -- # local var val
00:02:50.422 05:57:56 -- setup/common.sh@20 -- # local mem_f mem
00:02:50.422 05:57:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:50.422 05:57:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:50.422 05:57:56 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:50.422 05:57:56 -- setup/common.sh@28 -- # mapfile -t mem
00:02:50.422 05:57:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:50.423 05:57:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45921504 kB' 'MemAvailable: 49421764 kB' 'Buffers: 2704 kB' 'Cached: 10169428 kB' 'SwapCached: 0 kB' 'Active: 7179696 kB' 'Inactive: 3506552 kB' 'Active(anon): 6785344 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 517388 kB' 'Mapped: 174428 kB' 'Shmem: 6271228 kB' 'KReclaimable: 185956 kB' 'Slab: 550304 kB' 'SReclaimable: 185956 kB' 'SUnreclaim: 364348 kB' 'KernelStack: 12832 kB' 'PageTables: 7916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7910844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 15947776 kB' 'DirectMap1G: 51380224 kB'
00:02:50.424 05:57:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:50.424 05:57:56 -- setup/common.sh@33 -- # echo 0
00:02:50.424 05:57:56 -- setup/common.sh@33 -- # return 0
00:02:50.424 05:57:56 -- setup/hugepages.sh@100 -- # resv=0
00:02:50.424 05:57:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:50.424 nr_hugepages=1024
00:02:50.424 05:57:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:50.424 resv_hugepages=0
00:02:50.424 05:57:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:50.424 surplus_hugepages=0
00:02:50.424 05:57:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:50.424 anon_hugepages=0
00:02:50.424 05:57:56 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:50.424 05:57:56 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
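At this point the test has anon=0, surp=0 and resv=0 and checks them against the 1024 pages it expects before re-reading HugePages_Total. A sketch of that bookkeeping, using awk to pull the counters for brevity rather than the script's own per-key scan:

    #!/usr/bin/env bash
    # Sketch: confirm the configured hugepage pool is exactly what the kernel
    # reports, with no surplus or reserved pages skewing the count.
    nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)
    surp=$(awk '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)
    resv=$(awk '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)

    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage pool consistent: total=$total surp=$surp resv=$resv"
    else
        echo "hugepage pool inconsistent: total=$total nr_hugepages=$nr_hugepages surp=$surp resv=$resv" >&2
        exit 1
    fi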
00:02:50.424 05:57:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:50.424 05:57:56 -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:50.424 05:57:56 -- setup/common.sh@18 -- # local node=
00:02:50.424 05:57:56 -- setup/common.sh@19 -- # local var val
00:02:50.424 05:57:56 -- setup/common.sh@20 -- # local mem_f mem
00:02:50.424 05:57:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:50.424 05:57:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:50.424 05:57:56 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:50.424 05:57:56 -- setup/common.sh@28 -- # mapfile -t mem
00:02:50.424 05:57:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:50.424 05:57:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45919456 kB' 'MemAvailable: 49419716 kB' 'Buffers: 2704 kB' 'Cached: 10169444 kB' 'SwapCached: 0 kB' 'Active: 7183600 kB' 'Inactive: 3506552 kB' 'Active(anon): 6789248 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521252 kB' 'Mapped: 173948 kB' 'Shmem: 6271244 kB' 'KReclaimable: 185956 kB' 'Slab: 550304 kB' 'SReclaimable: 185956 kB' 'SUnreclaim: 364348 kB' 'KernelStack: 12768 kB' 'PageTables: 7684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7915224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196004 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 15947776 kB' 'DirectMap1G: 51380224 kB'
00:02:50.425 05:57:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:50.425 05:57:56 -- setup/common.sh@33 -- # echo 1024
00:02:50.425 05:57:56 -- setup/common.sh@33 -- # return 0
00:02:50.425 05:57:56 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:50.425 05:57:56 -- setup/hugepages.sh@112 -- # get_nodes
00:02:50.425 05:57:56 -- setup/hugepages.sh@27 -- # local node
00:02:50.425 05:57:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:50.425 05:57:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:50.425 05:57:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:50.425 05:57:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:50.425 05:57:56 -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:50.425 05:57:56 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
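get_nodes above finds two NUMA nodes and expects 512 pages on each; the reads that follow go through /sys/devices/system/node/node<N>/meminfo. A standalone sketch of that enumeration, assuming a NUMA machine (extglob for the node+([0-9]) glob, nullglob so a box without node directories just prints nothing):

    #!/usr/bin/env bash
    # Sketch: list each NUMA node's hugepage counters from its own meminfo file.
    shopt -s extglob nullglob
    for node in /sys/devices/system/node/node+([0-9]); do
        id=${node##*node}
        total=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
        free=$(awk '/HugePages_Free/ {print $NF}' "$node/meminfo")
        surp=$(awk '/HugePages_Surp/ {print $NF}' "$node/meminfo")
        echo "node$id: HugePages_Total=$total HugePages_Free=$free HugePages_Surp=$surp"
    done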
05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@33 -- # echo 0 00:02:50.426 05:57:56 -- setup/common.sh@33 -- # return 0 00:02:50.426 05:57:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:50.426 05:57:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:50.426 05:57:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:50.426 05:57:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:50.426 05:57:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:50.426 05:57:56 -- setup/common.sh@18 -- # local node=1 00:02:50.426 05:57:56 -- setup/common.sh@19 -- # local var val 00:02:50.426 05:57:56 -- setup/common.sh@20 -- # local mem_f mem 00:02:50.426 05:57:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.426 05:57:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:50.426 05:57:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:50.426 05:57:56 -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.426 05:57:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.426 
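The long runs of [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue above are one meminfo lookup per field: the traced get_meminfo steps pick /sys/devices/system/node/node0/meminfo when a node is given (falling back to /proc/meminfo), strip the "Node N " prefix that the per-node files carry, then walk the key/value pairs until the requested field is reached (HugePages_Surp for node 0 here, which comes back 0). A condensed sketch of that lookup, assuming a plain while/read loop is acceptable; the name meminfo_value is illustrative, not the real SPDK helper:

    #!/usr/bin/env bash
    # Return the value of one meminfo field, optionally for a single NUMA node.
    meminfo_value() {
        local get=$1 node=$2 line key val _
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#Node "$node" }            # per-node files prefix each line with "Node N "
            IFS=': ' read -r key val _ <<< "$line"
            if [[ $key == "$get" ]]; then
                echo "$val"                       # value in kB, or a bare count for HugePages_* fields
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    meminfo_value HugePages_Surp 0   # node-0 surplus hugepages, as queried in the trace above
    meminfo_value MemFree            # system-wide field from /proc/meminfo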
05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.426 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.426 05:57:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 17099000 kB' 'MemUsed: 10612824 kB' 'SwapCached: 0 kB' 'Active: 5288796 kB' 'Inactive: 3397856 kB' 'Active(anon): 5005332 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3397856 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8421080 kB' 'Mapped: 125188 kB' 'AnonPages: 265732 kB' 'Shmem: 4739760 kB' 'KernelStack: 4872 kB' 'PageTables: 3384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 101744 kB' 'Slab: 248728 kB' 'SReclaimable: 101744 kB' 'SUnreclaim: 146984 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.426 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.427 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.427 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.427 05:57:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.427 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.427 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.427 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.427 05:57:56 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.427 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.427 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.427 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.427 05:57:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.427 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.427 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.427 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.427 05:57:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.427 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.427 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.427 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.427 05:57:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.427 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.427 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.427 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.427 05:57:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.427 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.427 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.427 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.427 05:57:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.427 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.427 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.427 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.427 05:57:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.427 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.427 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.427 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.427 05:57:56 -- setup/common.sh@32 -- # 
[[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.427 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.427 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.685 05:57:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.685 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.685 05:57:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.686 05:57:56 -- setup/common.sh@32 -- # continue 00:02:50.686 05:57:56 -- setup/common.sh@31 -- # IFS=': ' 00:02:50.686 05:57:56 -- setup/common.sh@31 -- # read -r var val _ 00:02:50.686 05:57:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.686 05:57:56 -- 
setup/common.sh@32 -- # continue
00:02:50.686 05:57:56 -- setup/common.sh@31 -- # IFS=': '
00:02:50.686 05:57:56 -- setup/common.sh@31 -- # read -r var val _
00:02:50.686 05:57:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:50.686 05:57:56 -- setup/common.sh@32 -- # continue
00:02:50.686 05:57:56 -- setup/common.sh@31 -- # IFS=': '
00:02:50.686 05:57:56 -- setup/common.sh@31 -- # read -r var val _
00:02:50.686 05:57:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:50.686 05:57:56 -- setup/common.sh@33 -- # echo 0
00:02:50.686 05:57:56 -- setup/common.sh@33 -- # return 0
00:02:50.686 05:57:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:50.686 05:57:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:50.686 05:57:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:50.686 05:57:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:50.686 05:57:56 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:02:50.686 node0=512 expecting 512
00:02:50.686 05:57:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:50.686 05:57:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:50.686 05:57:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:50.686 05:57:56 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:02:50.686 node1=512 expecting 512
00:02:50.686 05:57:56 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:02:50.686
00:02:50.686 real 0m1.375s
00:02:50.686 user 0m0.560s
00:02:50.686 sys 0m0.780s
00:02:50.686 05:57:56 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:50.686 05:57:56 -- common/autotest_common.sh@10 -- # set +x
00:02:50.686 ************************************
00:02:50.686 END TEST even_2G_alloc
00:02:50.686 ************************************
00:02:50.686 05:57:56 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:02:50.686 05:57:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:02:50.686 05:57:56 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:02:50.686 05:57:56 -- common/autotest_common.sh@10 -- # set +x
00:02:50.686 ************************************
00:02:50.686 START TEST odd_alloc
00:02:50.686 ************************************
00:02:50.686 05:57:56 -- common/autotest_common.sh@1104 -- # odd_alloc
00:02:50.686 05:57:56 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:02:50.686 05:57:56 -- setup/hugepages.sh@49 -- # local size=2098176
00:02:50.686 05:57:56 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:02:50.686 05:57:56 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:50.686 05:57:56 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:02:50.686 05:57:56 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:02:50.686 05:57:56 -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:50.686 05:57:56 -- setup/hugepages.sh@62 -- # local user_nodes
00:02:50.686 05:57:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:02:50.686 05:57:56 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:50.686 05:57:56 -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:50.686 05:57:56 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:50.686 05:57:56 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:50.686 05:57:56 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:02:50.686 05:57:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:50.686 05:57:56 -- setup/hugepages.sh@82 -- #
nodes_test[_no_nodes - 1]=512 00:02:50.686 05:57:56 -- setup/hugepages.sh@83 -- # : 513 00:02:50.686 05:57:56 -- setup/hugepages.sh@84 -- # : 1 00:02:50.686 05:57:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:50.686 05:57:56 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:50.686 05:57:56 -- setup/hugepages.sh@83 -- # : 0 00:02:50.686 05:57:56 -- setup/hugepages.sh@84 -- # : 0 00:02:50.686 05:57:56 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:50.686 05:57:56 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:50.686 05:57:56 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:50.686 05:57:56 -- setup/hugepages.sh@160 -- # setup output 00:02:50.686 05:57:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:50.686 05:57:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:52.064 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:52.064 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:52.064 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:52.064 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:52.064 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:52.064 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:52.064 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:52.064 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:52.064 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:52.064 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:52.064 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:52.064 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:52.064 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:52.064 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:52.064 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:52.064 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:52.064 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:52.064 05:57:58 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:02:52.064 05:57:58 -- setup/hugepages.sh@89 -- # local node 00:02:52.064 05:57:58 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:52.064 05:57:58 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:52.064 05:57:58 -- setup/hugepages.sh@92 -- # local surp 00:02:52.064 05:57:58 -- setup/hugepages.sh@93 -- # local resv 00:02:52.064 05:57:58 -- setup/hugepages.sh@94 -- # local anon 00:02:52.064 05:57:58 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:52.064 05:57:58 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:52.064 05:57:58 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:52.064 05:57:58 -- setup/common.sh@18 -- # local node= 00:02:52.064 05:57:58 -- setup/common.sh@19 -- # local var val 00:02:52.064 05:57:58 -- setup/common.sh@20 -- # local mem_f mem 00:02:52.064 05:57:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.064 05:57:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:52.064 05:57:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:52.064 05:57:58 -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.064 05:57:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.064 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.064 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.064 05:57:58 -- 
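In the odd_alloc setup traced above, get_test_nr_hugepages asks for 2098176 kB, which is the 2049 MB exported as HUGEMEM and rounds up to 1025 pages of 2048 kB; the per-node loop then assigns 512 (1025/2) to node 1 first and leaves 513 for node 0. A split consistent with those assignments, sketched as an even share per node with the odd remainder landing on node 0; the exact hugepages.sh arithmetic is not reproduced in this fragment, so treat the formula as an assumption:

    #!/usr/bin/env bash
    # Distribute an odd hugepage count across NUMA nodes the way the trace
    # above ends up: equal shares, remainder added to node 0.
    nr_hugepages=1025
    no_nodes=2
    declare -a nodes_test
    for (( node = 0; node < no_nodes; node++ )); do
        nodes_test[node]=$(( nr_hugepages / no_nodes ))
    done
    (( nodes_test[0] += nr_hugepages % no_nodes ))
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=513 node1=512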
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45913336 kB' 'MemAvailable: 49413596 kB' 'Buffers: 2704 kB' 'Cached: 10169500 kB' 'SwapCached: 0 kB' 'Active: 7177100 kB' 'Inactive: 3506552 kB' 'Active(anon): 6782748 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514668 kB' 'Mapped: 172656 kB' 'Shmem: 6271300 kB' 'KReclaimable: 185956 kB' 'Slab: 550260 kB' 'SReclaimable: 185956 kB' 'SUnreclaim: 364304 kB' 'KernelStack: 12944 kB' 'PageTables: 8064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 7899392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196224 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 15947776 kB' 'DirectMap1G: 51380224 kB' 00:02:52.064 05:57:58 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.064 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.064 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.064 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.064 05:57:58 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.064 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.064 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.064 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.064 05:57:58 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.064 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.064 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.064 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.064 05:57:58 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.064 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.064 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.064 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.064 05:57:58 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.064 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.064 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- 
setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 
05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.065 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.065 05:57:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.065 05:57:58 -- setup/common.sh@33 -- # echo 0 00:02:52.065 05:57:58 -- setup/common.sh@33 -- # return 0 00:02:52.065 05:57:58 -- setup/hugepages.sh@97 -- # anon=0 00:02:52.065 05:57:58 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:52.065 05:57:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:52.065 05:57:58 -- setup/common.sh@18 -- # local node= 00:02:52.066 05:57:58 -- setup/common.sh@19 -- # local var val 00:02:52.066 05:57:58 -- setup/common.sh@20 -- # local mem_f mem 00:02:52.066 05:57:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.066 05:57:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:52.066 05:57:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:52.066 05:57:58 -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.066 05:57:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.066 05:57:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45913552 kB' 'MemAvailable: 49413812 kB' 'Buffers: 2704 kB' 'Cached: 10169504 kB' 'SwapCached: 0 kB' 'Active: 7177472 kB' 'Inactive: 3506552 kB' 'Active(anon): 6783120 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515012 kB' 'Mapped: 172596 kB' 'Shmem: 6271304 kB' 'KReclaimable: 185956 kB' 'Slab: 550260 kB' 'SReclaimable: 185956 kB' 'SUnreclaim: 
364304 kB' 'KernelStack: 13168 kB' 'PageTables: 8576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 7899404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196352 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 15947776 kB' 'DirectMap1G: 51380224 kB' 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # [[ Active(file) 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # read 
-r var val _ 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.066 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.066 05:57:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # continue 
00:02:52.067 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.067 05:57:58 -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.067 05:57:58 -- setup/common.sh@33 -- # echo 0 00:02:52.067 05:57:58 -- setup/common.sh@33 -- # return 0 00:02:52.067 05:57:58 -- setup/hugepages.sh@99 -- # surp=0 00:02:52.067 05:57:58 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:52.067 05:57:58 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:52.067 05:57:58 -- setup/common.sh@18 -- # local node= 00:02:52.067 05:57:58 -- setup/common.sh@19 -- # local var val 00:02:52.067 05:57:58 -- setup/common.sh@20 -- # local mem_f mem 00:02:52.067 05:57:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.067 05:57:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:52.067 05:57:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:52.067 05:57:58 -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.067 05:57:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.067 05:57:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45912980 kB' 'MemAvailable: 49413240 kB' 'Buffers: 2704 kB' 'Cached: 10169516 kB' 'SwapCached: 0 kB' 'Active: 7177860 kB' 'Inactive: 3506552 kB' 'Active(anon): 6783508 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515332 kB' 'Mapped: 172520 kB' 'Shmem: 6271316 kB' 'KReclaimable: 185956 kB' 'Slab: 550292 kB' 'SReclaimable: 185956 kB' 'SUnreclaim: 364336 kB' 'KernelStack: 13312 kB' 'PageTables: 10060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 7899428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196448 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 15947776 kB' 'DirectMap1G: 51380224 kB' 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # read -r var val 
_ 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.067 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.067 05:57:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 
05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 
00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.068 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.068 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.069 05:57:58 -- setup/common.sh@33 -- # echo 0 00:02:52.069 05:57:58 -- setup/common.sh@33 -- # return 0 00:02:52.069 05:57:58 -- setup/hugepages.sh@100 -- # resv=0 00:02:52.069 05:57:58 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:02:52.069 nr_hugepages=1025 00:02:52.069 05:57:58 -- setup/hugepages.sh@103 -- # 
echo resv_hugepages=0 00:02:52.069 resv_hugepages=0 00:02:52.069 05:57:58 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:52.069 surplus_hugepages=0 00:02:52.069 05:57:58 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:52.069 anon_hugepages=0 00:02:52.069 05:57:58 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:52.069 05:57:58 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:02:52.069 05:57:58 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:52.069 05:57:58 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:52.069 05:57:58 -- setup/common.sh@18 -- # local node= 00:02:52.069 05:57:58 -- setup/common.sh@19 -- # local var val 00:02:52.069 05:57:58 -- setup/common.sh@20 -- # local mem_f mem 00:02:52.069 05:57:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.069 05:57:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:52.069 05:57:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:52.069 05:57:58 -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.069 05:57:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.069 05:57:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45912880 kB' 'MemAvailable: 49413140 kB' 'Buffers: 2704 kB' 'Cached: 10169528 kB' 'SwapCached: 0 kB' 'Active: 7177812 kB' 'Inactive: 3506552 kB' 'Active(anon): 6783460 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515304 kB' 'Mapped: 172520 kB' 'Shmem: 6271328 kB' 'KReclaimable: 185956 kB' 'Slab: 550252 kB' 'SReclaimable: 185956 kB' 'SUnreclaim: 364296 kB' 'KernelStack: 13296 kB' 'PageTables: 9408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 7899432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196384 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 15947776 kB' 'DirectMap1G: 51380224 kB' 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.069 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.069 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- 
setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # 
read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.070 05:57:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.070 05:57:58 -- setup/common.sh@33 -- # echo 1025 00:02:52.070 05:57:58 -- setup/common.sh@33 -- # return 0 00:02:52.070 05:57:58 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:52.070 05:57:58 -- setup/hugepages.sh@112 -- # get_nodes 00:02:52.070 05:57:58 -- setup/hugepages.sh@27 -- # local node 00:02:52.070 05:57:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:52.070 05:57:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:52.070 05:57:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:52.070 05:57:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:52.070 05:57:58 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:52.070 05:57:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:52.070 05:57:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:52.070 05:57:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:52.070 05:57:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:52.070 05:57:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:52.070 05:57:58 -- setup/common.sh@18 -- # local node=0 00:02:52.070 05:57:58 -- setup/common.sh@19 -- # local var val 00:02:52.070 05:57:58 -- setup/common.sh@20 -- # local mem_f mem 00:02:52.070 05:57:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.070 
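[editor's note] The trace above shows common.sh's get_meminfo scanning key/value pairs and hugepages.sh enumerating NUMA nodes before checking per-node counters. As a hedged, minimal sketch only (get_meminfo_sketch is an illustrative stand-in, not the real common.sh function), the lookup it performs amounts to:

#!/usr/bin/env bash
# Sketch of the traced lookup: pick /proc/meminfo or the per-node file,
# strip the "Node N " prefix, and print the value for one key.
get_meminfo_sketch() {
    local key=$1 node=${2:-}
    local file=/proc/meminfo
    # With a node argument, read the per-NUMA-node counters instead,
    # as the trace does for /sys/devices/system/node/node0/meminfo.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && file=/sys/devices/system/node/node$node/meminfo
    # Lines look like "HugePages_Surp:    0"; per-node files carry a
    # leading "Node 0 " prefix, which sed drops here.
    sed -E 's/^Node [0-9]+ +//' "$file" \
        | awk -v k="$key:" '$1 == k {print $2; exit}'
}

# Example: per-node surplus hugepages, as queried in the trace above.
get_meminfo_sketch HugePages_Surp 0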
05:57:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:52.070 05:57:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:52.070 05:57:58 -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.070 05:57:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.070 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 28816352 kB' 'MemUsed: 4013532 kB' 'SwapCached: 0 kB' 'Active: 1891816 kB' 'Inactive: 108696 kB' 'Active(anon): 1780928 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1751088 kB' 'Mapped: 47672 kB' 'AnonPages: 252524 kB' 'Shmem: 1531504 kB' 'KernelStack: 8504 kB' 'PageTables: 6556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84212 kB' 'Slab: 301528 kB' 'SReclaimable: 84212 kB' 'SUnreclaim: 217316 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val 
_ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.071 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.071 05:57:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.071 05:57:58 -- setup/common.sh@33 -- # echo 0 00:02:52.071 05:57:58 -- setup/common.sh@33 -- # return 0 00:02:52.071 05:57:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:52.071 05:57:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:52.071 05:57:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:52.071 05:57:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:52.071 05:57:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:52.071 05:57:58 -- setup/common.sh@18 -- # local node=1 00:02:52.071 05:57:58 -- setup/common.sh@19 -- # local var val 00:02:52.071 05:57:58 -- setup/common.sh@20 -- # local mem_f mem 00:02:52.071 05:57:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.071 05:57:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:52.071 05:57:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:52.071 05:57:58 -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.072 05:57:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 17094196 kB' 'MemUsed: 10617628 kB' 'SwapCached: 0 kB' 'Active: 5287036 kB' 'Inactive: 3397856 kB' 'Active(anon): 5003572 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3397856 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8421176 kB' 'Mapped: 124908 kB' 'AnonPages: 263808 kB' 'Shmem: 4739856 kB' 'KernelStack: 4856 kB' 'PageTables: 3184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 101744 kB' 'Slab: 248724 kB' 'SReclaimable: 101744 kB' 'SUnreclaim: 146980 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val 
_ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # continue 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # IFS=': ' 00:02:52.072 05:57:58 -- setup/common.sh@31 -- # read -r var val _ 00:02:52.072 05:57:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.072 05:57:58 -- setup/common.sh@33 -- # echo 0 00:02:52.072 05:57:58 -- setup/common.sh@33 -- # return 0 00:02:52.072 05:57:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:52.072 05:57:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:52.072 05:57:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:52.072 05:57:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:52.072 05:57:58 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:02:52.072 node0=512 expecting 513 00:02:52.072 05:57:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:52.072 05:57:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:52.072 05:57:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:52.072 05:57:58 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:02:52.072 node1=513 expecting 512 00:02:52.072 05:57:58 -- 
setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:02:52.072 00:02:52.072 real 0m1.509s 00:02:52.073 user 0m0.649s 00:02:52.073 sys 0m0.828s 00:02:52.073 05:57:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:52.073 05:57:58 -- common/autotest_common.sh@10 -- # set +x 00:02:52.073 ************************************ 00:02:52.073 END TEST odd_alloc 00:02:52.073 ************************************ 00:02:52.073 05:57:58 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:02:52.073 05:57:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:52.073 05:57:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:52.073 05:57:58 -- common/autotest_common.sh@10 -- # set +x 00:02:52.073 ************************************ 00:02:52.073 START TEST custom_alloc 00:02:52.073 ************************************ 00:02:52.073 05:57:58 -- common/autotest_common.sh@1104 -- # custom_alloc 00:02:52.073 05:57:58 -- setup/hugepages.sh@167 -- # local IFS=, 00:02:52.073 05:57:58 -- setup/hugepages.sh@169 -- # local node 00:02:52.073 05:57:58 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:02:52.073 05:57:58 -- setup/hugepages.sh@170 -- # local nodes_hp 00:02:52.073 05:57:58 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:02:52.073 05:57:58 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:02:52.073 05:57:58 -- setup/hugepages.sh@49 -- # local size=1048576 00:02:52.073 05:57:58 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:52.073 05:57:58 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:52.073 05:57:58 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:52.073 05:57:58 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:52.073 05:57:58 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:52.073 05:57:58 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:52.073 05:57:58 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:52.073 05:57:58 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:52.073 05:57:58 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:52.073 05:57:58 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:52.073 05:57:58 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:52.073 05:57:58 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:52.073 05:57:58 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:52.073 05:57:58 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:52.073 05:57:58 -- setup/hugepages.sh@83 -- # : 256 00:02:52.073 05:57:58 -- setup/hugepages.sh@84 -- # : 1 00:02:52.073 05:57:58 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:52.073 05:57:58 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:52.073 05:57:58 -- setup/hugepages.sh@83 -- # : 0 00:02:52.073 05:57:58 -- setup/hugepages.sh@84 -- # : 0 00:02:52.073 05:57:58 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:52.073 05:57:58 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:02:52.073 05:57:58 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:02:52.073 05:57:58 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:02:52.073 05:57:58 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:52.073 05:57:58 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:52.073 05:57:58 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:52.073 05:57:58 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:52.073 05:57:58 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:52.073 05:57:58 -- setup/hugepages.sh@62 -- # 
user_nodes=() 00:02:52.073 05:57:58 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:52.073 05:57:58 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:52.073 05:57:58 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:52.073 05:57:58 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:52.073 05:57:58 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:52.073 05:57:58 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:52.073 05:57:58 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:02:52.073 05:57:58 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:52.073 05:57:58 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:52.073 05:57:58 -- setup/hugepages.sh@78 -- # return 0 00:02:52.073 05:57:58 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:02:52.073 05:57:58 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:52.073 05:57:58 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:52.073 05:57:58 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:52.073 05:57:58 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:52.073 05:57:58 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:52.073 05:57:58 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:52.073 05:57:58 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:02:52.073 05:57:58 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:52.073 05:57:58 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:52.073 05:57:58 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:52.073 05:57:58 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:52.073 05:57:58 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:52.073 05:57:58 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:52.073 05:57:58 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:52.073 05:57:58 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:02:52.073 05:57:58 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:52.073 05:57:58 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:52.073 05:57:58 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:52.073 05:57:58 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:02:52.073 05:57:58 -- setup/hugepages.sh@78 -- # return 0 00:02:52.073 05:57:58 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:02:52.073 05:57:58 -- setup/hugepages.sh@187 -- # setup output 00:02:52.073 05:57:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:52.073 05:57:58 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:53.446 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:53.446 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:53.446 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:53.446 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:53.446 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:53.446 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:53.446 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:53.446 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:53.446 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:53.446 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:53.446 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:53.447 0000:80:04.5 (8086 
00:02:53.447 05:57:59 -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:02:53.447 05:57:59 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:02:53.447 05:57:59 -- setup/hugepages.sh@89 -- # local node
00:02:53.447 05:57:59 -- setup/hugepages.sh@90 -- # local sorted_t
00:02:53.447 05:57:59 -- setup/hugepages.sh@91 -- # local sorted_s
00:02:53.447 05:57:59 -- setup/hugepages.sh@92 -- # local surp
00:02:53.447 05:57:59 -- setup/hugepages.sh@93 -- # local resv
00:02:53.447 05:57:59 -- setup/hugepages.sh@94 -- # local anon
00:02:53.447 05:57:59 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:53.447 05:57:59 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:53.447 05:57:59 -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:53.447 05:57:59 -- setup/common.sh@18 -- # local node=
00:02:53.447 05:57:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:53.447 05:57:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:53.447 05:57:59 -- setup/common.sh@28 -- # mapfile -t mem
00:02:53.447 05:57:59 -- setup/common.sh@16 -- # printf '%s\n' </proc/meminfo snapshot: MemTotal: 60541708 kB, MemFree: 44868920 kB, MemAvailable: 48369180 kB, ..., AnonHugePages: 0 kB, HugePages_Total: 1536, HugePages_Free: 1536, HugePages_Rsvd: 0, HugePages_Surp: 0, Hugepagesize: 2048 kB, Hugetlb: 3145728 kB>
[xtrace condensed: the IFS=': ' read loop skips every /proc/meminfo field until it reaches AnonHugePages]
00:02:53.448 05:57:59 -- setup/common.sh@33 -- # echo 0
00:02:53.448 05:57:59 -- setup/common.sh@33 -- # return 0
00:02:53.448 05:57:59 -- setup/hugepages.sh@97 -- # anon=0
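The AnonHugePages value above is obtained by dumping /proc/meminfo and scanning it field by field. A minimal stand-alone version of that idea, modeled on the trace rather than copied from setup/common.sh (the helper name and the digits-only trimming are choices made here for illustration):

    #!/usr/bin/env bash
    # Sketch of a get_meminfo-style lookup: return one field from /proc/meminfo,
    # or from a node's meminfo file when a node number is given.
    get_meminfo() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo line val
      if [[ -n $node ]]; then
        mem_f=/sys/devices/system/node/node${node}/meminfo
      fi
      while IFS= read -r line; do
        line=${line#Node ${node:-X} }      # per-node files prefix every line with "Node N "
        if [[ $line == "$get":* ]]; then
          val=${line#*:}                   # text after "Field:"
          val=${val//[^0-9]/}              # keep digits only (drops the " kB" unit)
          echo "$val"
          return 0
        fi
      done < "$mem_f"
      return 1
    }

    get_meminfo AnonHugePages       # prints 0 on this run
    get_meminfo HugePages_Total     # prints 1536 on this run
    get_meminfo HugePages_Surp 0    # per-node query against node0/meminfo

The per-field scan explains the long runs of "continue" lines in the raw trace: every non-matching field produces one xtrace entry.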
00:02:53.448 05:57:59 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:53.448 05:57:59 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:53.448 05:57:59 -- setup/common.sh@18 -- # local node=
00:02:53.448 05:57:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:53.448 05:57:59 -- setup/common.sh@28 -- # mapfile -t mem
00:02:53.448 05:57:59 -- setup/common.sh@16 -- # printf '%s\n' </proc/meminfo snapshot: MemTotal: 60541708 kB, MemFree: 44869008 kB, MemAvailable: 48369268 kB, ..., AnonHugePages: 0 kB, HugePages_Total: 1536, HugePages_Free: 1536, HugePages_Rsvd: 0, HugePages_Surp: 0, Hugepagesize: 2048 kB, Hugetlb: 3145728 kB>
[xtrace condensed: the read loop skips every field until it reaches HugePages_Surp]
00:02:53.449 05:57:59 -- setup/common.sh@33 -- # echo 0
00:02:53.449 05:57:59 -- setup/common.sh@33 -- # return 0
00:02:53.449 05:57:59 -- setup/hugepages.sh@99 -- # surp=0
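The HugePages_Surp and HugePages_Rsvd counters that the script digs out of /proc/meminfo are also exposed directly under sysfs. This is not what setup/common.sh does here; the snippet below is only an equivalent read of the same global 2 MiB counters, shown for orientation.

    # Equivalent view of the global 2 MiB hugepage counters -- not the script's
    # method (it parses /proc/meminfo), shown only for comparison.
    d=/sys/kernel/mm/hugepages/hugepages-2048kB
    for f in nr_hugepages free_hugepages resv_hugepages surplus_hugepages; do
      printf '%-20s %s\n' "$f" "$(cat "$d/$f")"
    done
    # Expected on this run: nr_hugepages 1536, free_hugepages 1536,
    # resv_hugepages 0, surplus_hugepages 0.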
00:02:53.449 05:57:59 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:53.449 05:57:59 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:53.449 05:57:59 -- setup/common.sh@18 -- # local node=
00:02:53.449 05:57:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:53.449 05:57:59 -- setup/common.sh@28 -- # mapfile -t mem
00:02:53.449 05:57:59 -- setup/common.sh@16 -- # printf '%s\n' </proc/meminfo snapshot: MemTotal: 60541708 kB, MemFree: 44868976 kB, MemAvailable: 48369236 kB, ..., AnonHugePages: 0 kB, HugePages_Total: 1536, HugePages_Free: 1536, HugePages_Rsvd: 0, HugePages_Surp: 0, Hugepagesize: 2048 kB, Hugetlb: 3145728 kB>
[xtrace condensed: the read loop skips every field until it reaches HugePages_Rsvd]
00:02:53.710 05:57:59 -- setup/common.sh@33 -- # echo 0
00:02:53.710 05:57:59 -- setup/common.sh@33 -- # return 0
00:02:53.710 05:57:59 -- setup/hugepages.sh@100 -- # resv=0
00:02:53.710 05:57:59 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:02:53.710 nr_hugepages=1536
00:02:53.710 05:57:59 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:53.710 resv_hugepages=0
00:02:53.710 05:57:59 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:53.710 surplus_hugepages=0
00:02:53.710 05:57:59 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:53.710 anon_hugepages=0
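With anon=0, surp=0 and resv=0 recorded, the lines that follow check that the kernel-reported HugePages_Total matches the 1536 pages that were requested. Reduced to a stand-alone check, a sketch of that traced arithmetic (the variable names are assumptions, not the hugepages.sh source) looks like this:

    # Sketch of the consistency check applied next: the HugePages_Total that the
    # kernel reports must equal the requested count plus any surplus and
    # reserved pages counted above.
    expected=1536 surp=0 resv=0
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    if (( total == expected + surp + resv )); then
      echo "hugepage accounting OK: ${total} pages"
    else
      echo "mismatch: kernel reports ${total}, expected $((expected + surp + resv))" >&2
    fi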
00:02:53.710 05:57:59 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:02:53.710 05:57:59 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:02:53.710 05:57:59 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:53.710 05:57:59 -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:53.710 05:57:59 -- setup/common.sh@18 -- # local node=
00:02:53.710 05:57:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:53.710 05:57:59 -- setup/common.sh@28 -- # mapfile -t mem
00:02:53.711 05:57:59 -- setup/common.sh@16 -- # printf '%s\n' </proc/meminfo snapshot: MemTotal: 60541708 kB, MemFree: 44868752 kB, MemAvailable: 48369012 kB, ..., AnonHugePages: 0 kB, HugePages_Total: 1536, HugePages_Free: 1536, HugePages_Rsvd: 0, HugePages_Surp: 0, Hugepagesize: 2048 kB, Hugetlb: 3145728 kB>
[xtrace condensed: the read loop skips every field until it reaches HugePages_Total]
00:02:53.712 05:57:59 -- setup/common.sh@33 -- # echo 1536
00:02:53.712 05:57:59 -- setup/common.sh@33 -- # return 0
setup/common.sh@31 -- # IFS=': ' 00:02:53.712 05:57:59 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.712 05:57:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.712 05:57:59 -- setup/common.sh@32 -- # continue 00:02:53.712 05:57:59 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.712 05:57:59 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.712 05:57:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.712 05:57:59 -- setup/common.sh@33 -- # echo 1536 00:02:53.712 05:57:59 -- setup/common.sh@33 -- # return 0 00:02:53.712 05:57:59 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:53.712 05:57:59 -- setup/hugepages.sh@112 -- # get_nodes 00:02:53.712 05:57:59 -- setup/hugepages.sh@27 -- # local node 00:02:53.712 05:57:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:53.712 05:57:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:53.712 05:57:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:53.712 05:57:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:53.712 05:57:59 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:53.712 05:57:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:53.712 05:57:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:53.712 05:57:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:53.712 05:57:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:53.712 05:57:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:53.712 05:57:59 -- setup/common.sh@18 -- # local node=0 00:02:53.712 05:57:59 -- setup/common.sh@19 -- # local var val 00:02:53.712 05:57:59 -- setup/common.sh@20 -- # local mem_f mem 00:02:53.712 05:57:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.712 05:57:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:53.712 05:57:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:53.712 05:57:59 -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.712 05:57:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.712 05:57:59 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.712 05:57:59 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.712 05:57:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 28823988 kB' 'MemUsed: 4005896 kB' 'SwapCached: 0 kB' 'Active: 1889048 kB' 'Inactive: 108696 kB' 'Active(anon): 1778160 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1751088 kB' 'Mapped: 47748 kB' 'AnonPages: 249808 kB' 'Shmem: 1531504 kB' 'KernelStack: 7976 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84212 kB' 'Slab: 301636 kB' 'SReclaimable: 84212 kB' 'SUnreclaim: 217424 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:53.712 05:57:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.712 05:57:59 -- setup/common.sh@32 -- # continue 00:02:53.712 05:57:59 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.712 05:57:59 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.712 05:57:59 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.712 05:57:59 -- setup/common.sh@32 -- # continue 00:02:53.712 05:57:59 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.712 05:57:59 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.712 05:57:59 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.712 05:57:59 -- setup/common.sh@32 -- # continue 00:02:53.712 05:57:59 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.712 05:57:59 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.712 05:57:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.712 05:57:59 -- setup/common.sh@32 -- # continue 00:02:53.712 05:57:59 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.712 05:57:59 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 
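Note on the node-0 lookup traced above: this is setup/common.sh's get_meminfo called with an explicit node. Because /sys/devices/system/node/node0/meminfo exists, mem_f is switched from /proc/meminfo to the per-node file, the file is slurped with mapfile, the leading "Node 0 " prefix is stripped, and each "key: value" line is compared against HugePages_Surp until it matches and its value is echoed. A minimal standalone sketch of the same lookup, written here for illustration (get_node_meminfo is a hypothetical name, not an SPDK helper):

    # Sketch of a per-node meminfo lookup mirroring the traced get_meminfo.
    get_node_meminfo() {
        local key=$1 node=$2
        local file=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            file=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#Node "$node" }   # per-node files prefix each line with "Node <N> "
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$key" ]]; then
                echo "$val"
                return 0
            fi
        done < "$file"
        return 1
    }
    # e.g. get_node_meminfo HugePages_Surp 0   -> 0 on the node traced above

Keeping IFS=': ' matches the trace's split of "HugePages_Surp: 0" into the key and the bare value.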
00:02:53.712 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.712 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.712 05:58:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- 
setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@33 -- # echo 0 00:02:53.713 05:58:00 -- setup/common.sh@33 -- # return 0 00:02:53.713 05:58:00 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:53.713 05:58:00 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:53.713 05:58:00 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:53.713 05:58:00 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:53.713 05:58:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:53.713 05:58:00 -- setup/common.sh@18 -- # local node=1 00:02:53.713 05:58:00 -- setup/common.sh@19 -- # local var val 00:02:53.713 05:58:00 -- setup/common.sh@20 -- # local mem_f mem 00:02:53.713 05:58:00 -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.713 05:58:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:53.713 05:58:00 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:53.713 05:58:00 -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.713 05:58:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16044924 kB' 'MemUsed: 11666900 kB' 'SwapCached: 0 kB' 'Active: 5287008 kB' 'Inactive: 3397856 kB' 'Active(anon): 5003544 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3397856 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8421272 kB' 'Mapped: 124916 kB' 'AnonPages: 263716 kB' 'Shmem: 4739952 kB' 'KernelStack: 4872 kB' 'PageTables: 3256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 101744 kB' 'Slab: 248664 kB' 'SReclaimable: 101744 kB' 'SUnreclaim: 146920 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 
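Both per-node HugePages_Surp lookups in this stretch feed the accounting check recorded earlier at setup/hugepages.sh@110, (( 1536 == nr_hugepages + surp + resv )): the measured HugePages_Total has to equal the configured page count plus any surplus and reserved pages before the per-node split is examined. As an illustration only (the variable names and awk extraction are mine, not the script's), the same invariant restated against /proc/meminfo:

    # Illustrative restatement of the hugepages.sh@110 invariant; 1536 is the
    # page count this custom_alloc run configured (512 on node 0 + 1024 on node 1).
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    requested=1536
    (( total == requested + surp + rsvd )) || echo "hugepage accounting mismatch" >&2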
00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- 
setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.713 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.713 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.714 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.714 05:58:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.714 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.714 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.714 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.714 05:58:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.714 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.714 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.714 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.714 05:58:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.714 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.714 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.714 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.714 05:58:00 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.714 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.714 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.714 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.714 05:58:00 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.714 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.714 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.714 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.714 05:58:00 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.714 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.714 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.714 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.714 05:58:00 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.714 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.714 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.714 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 
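The node-1 lookup finishing here mirrors the node-0 one, and the expectations echoed just below ("node0=512 expecting 512", "node1=1024 expecting 1024") come from the get_nodes pass that read 512 and 1024 pages out of the two nodes' sysfs entries. A small sketch of how such an asymmetric split can be confirmed from sysfs (the expected values are this run's; the path is the kernel's standard location for 2048 kB hugepages):

    # Sketch: confirm the per-node hugepage split this test expects.
    expected=( [0]=512 [1]=1024 )
    for node in "${!expected[@]}"; do
        f=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
        got=$(<"$f")
        (( got == expected[node] )) ||
            echo "node$node: expected ${expected[node]} hugepages, got $got" >&2
    done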
00:02:53.714 05:58:00 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.714 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.714 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.714 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.714 05:58:00 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.714 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.714 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.714 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.714 05:58:00 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.714 05:58:00 -- setup/common.sh@32 -- # continue 00:02:53.714 05:58:00 -- setup/common.sh@31 -- # IFS=': ' 00:02:53.714 05:58:00 -- setup/common.sh@31 -- # read -r var val _ 00:02:53.714 05:58:00 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.714 05:58:00 -- setup/common.sh@33 -- # echo 0 00:02:53.714 05:58:00 -- setup/common.sh@33 -- # return 0 00:02:53.714 05:58:00 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:53.714 05:58:00 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:53.714 05:58:00 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:53.714 05:58:00 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:53.714 05:58:00 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:53.714 node0=512 expecting 512 00:02:53.714 05:58:00 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:53.714 05:58:00 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:53.714 05:58:00 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:53.714 05:58:00 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:02:53.714 node1=1024 expecting 1024 00:02:53.714 05:58:00 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:02:53.714 00:02:53.714 real 0m1.538s 00:02:53.714 user 0m0.663s 00:02:53.714 sys 0m0.842s 00:02:53.714 05:58:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:53.714 05:58:00 -- common/autotest_common.sh@10 -- # set +x 00:02:53.714 ************************************ 00:02:53.714 END TEST custom_alloc 00:02:53.714 ************************************ 00:02:53.714 05:58:00 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:02:53.714 05:58:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:53.714 05:58:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:53.714 05:58:00 -- common/autotest_common.sh@10 -- # set +x 00:02:53.714 ************************************ 00:02:53.714 START TEST no_shrink_alloc 00:02:53.714 ************************************ 00:02:53.714 05:58:00 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:02:53.714 05:58:00 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:02:53.714 05:58:00 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:53.714 05:58:00 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:53.714 05:58:00 -- setup/hugepages.sh@51 -- # shift 00:02:53.714 05:58:00 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:53.714 05:58:00 -- setup/hugepages.sh@52 -- # local node_ids 00:02:53.714 05:58:00 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:53.714 05:58:00 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:53.714 05:58:00 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:53.714 05:58:00 -- 
setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:53.714 05:58:00 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:53.714 05:58:00 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:53.714 05:58:00 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:53.714 05:58:00 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:53.714 05:58:00 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:53.714 05:58:00 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:53.714 05:58:00 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:53.714 05:58:00 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:53.714 05:58:00 -- setup/hugepages.sh@73 -- # return 0 00:02:53.714 05:58:00 -- setup/hugepages.sh@198 -- # setup output 00:02:53.714 05:58:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:53.714 05:58:00 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:55.095 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:55.095 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:55.095 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:55.095 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:55.095 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:55.095 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:55.095 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:55.095 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:55.095 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:55.095 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:55.095 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:55.095 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:55.095 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:55.095 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:55.095 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:55.095 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:55.095 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:55.095 05:58:01 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:02:55.095 05:58:01 -- setup/hugepages.sh@89 -- # local node 00:02:55.095 05:58:01 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:55.095 05:58:01 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:55.095 05:58:01 -- setup/hugepages.sh@92 -- # local surp 00:02:55.096 05:58:01 -- setup/hugepages.sh@93 -- # local resv 00:02:55.096 05:58:01 -- setup/hugepages.sh@94 -- # local anon 00:02:55.096 05:58:01 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:55.096 05:58:01 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:55.096 05:58:01 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:55.096 05:58:01 -- setup/common.sh@18 -- # local node= 00:02:55.096 05:58:01 -- setup/common.sh@19 -- # local var val 00:02:55.096 05:58:01 -- setup/common.sh@20 -- # local mem_f mem 00:02:55.096 05:58:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.096 05:58:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.096 05:58:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.096 05:58:01 -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.096 05:58:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.096 05:58:01 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:55.096 05:58:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45888196 kB' 'MemAvailable: 49388456 kB' 'Buffers: 2704 kB' 'Cached: 10169692 kB' 'SwapCached: 0 kB' 'Active: 7179448 kB' 'Inactive: 3506552 kB' 'Active(anon): 6785096 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516832 kB' 'Mapped: 173156 kB' 'Shmem: 6271492 kB' 'KReclaimable: 185956 kB' 'Slab: 550212 kB' 'SReclaimable: 185956 kB' 'SUnreclaim: 364256 kB' 'KernelStack: 12816 kB' 'PageTables: 7604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7899984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 15947776 kB' 'DirectMap1G: 51380224 kB' 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.096 05:58:01 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.096 
05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.096 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.096 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
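The AnonHugePages scan running through here belongs to verify_nr_hugepages for the no_shrink_alloc case set up above with get_test_nr_hugepages 2097152 0: 2097152 kB pinned to node 0, which with the 2048 kB Hugepagesize reported in the same dump works out to the nr_hugepages=1024 seen in the trace. The exact arithmetic inside get_test_nr_hugepages is not shown in the trace, so the following is only the obvious way to reproduce the number:

    # Sketch: reproduce nr_hugepages=1024 from the requested size.
    size_kb=2097152                                                  # argument to get_test_nr_hugepages
    hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this host
    echo $(( size_kb / hugepage_kb ))                                # -> 1024, matching the trace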
00:02:55.097 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:55.097 05:58:01 -- setup/common.sh@33 -- # echo 0 00:02:55.097 05:58:01 -- setup/common.sh@33 -- # return 0 00:02:55.097 05:58:01 -- setup/hugepages.sh@97 -- # anon=0 00:02:55.097 05:58:01 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:55.097 05:58:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:55.097 05:58:01 -- setup/common.sh@18 -- # local node= 00:02:55.097 05:58:01 -- setup/common.sh@19 -- # local var val 00:02:55.097 05:58:01 -- setup/common.sh@20 -- # local mem_f mem 00:02:55.097 05:58:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.097 05:58:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.097 05:58:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.097 05:58:01 -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.097 05:58:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.097 05:58:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45884320 kB' 'MemAvailable: 49384580 kB' 'Buffers: 2704 kB' 'Cached: 10169696 kB' 'SwapCached: 0 kB' 'Active: 7182432 kB' 'Inactive: 3506552 kB' 'Active(anon): 6788080 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519840 kB' 'Mapped: 173584 kB' 'Shmem: 6271496 kB' 'KReclaimable: 
185956 kB' 'Slab: 550264 kB' 'SReclaimable: 185956 kB' 'SUnreclaim: 364308 kB' 'KernelStack: 12816 kB' 'PageTables: 7612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7902120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196116 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 15947776 kB' 'DirectMap1G: 51380224 kB' 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # read -r var val 
_ 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.097 05:58:01 -- setup/common.sh@31 -- 
# IFS=': ' 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.097 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.097 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
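This second HugePages_Surp pass runs with node unset, so get_meminfo stays on /proc/meminfo, and it follows the anon=0 result recorded a little earlier: before counting, verify_nr_hugepages tested [[ always [madvise] never != *\[\n\e\v\e\r\]* ]], i.e. that "never" is not the selected transparent-hugepage mode. A hedged sketch of reading that mode the same way (the sysfs file is standard; the grep extraction is mine):

    # Sketch: inspect the active transparent-hugepage mode; the kernel
    # brackets the selected choice, e.g. "always [madvise] never".
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp == *'[never]'* ]]; then
        echo "THP disabled; AnonHugePages is expected to stay 0"
    else
        echo "active THP mode: $(grep -o '\[[a-z]*\]' <<< "$thp")"
    fi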
00:02:55.098 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # 
read -r var val _ 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.098 05:58:01 -- setup/common.sh@33 -- # echo 0 00:02:55.098 05:58:01 -- setup/common.sh@33 -- # return 0 00:02:55.098 05:58:01 -- setup/hugepages.sh@99 -- # surp=0 00:02:55.098 05:58:01 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:55.098 05:58:01 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:55.098 05:58:01 -- setup/common.sh@18 -- # local node= 00:02:55.098 05:58:01 -- setup/common.sh@19 -- # local var val 00:02:55.098 05:58:01 -- setup/common.sh@20 -- # local mem_f mem 00:02:55.098 05:58:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.098 05:58:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.098 05:58:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.098 05:58:01 -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.098 05:58:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.098 05:58:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45884648 kB' 'MemAvailable: 49384908 kB' 'Buffers: 2704 kB' 'Cached: 10169708 kB' 'SwapCached: 0 kB' 'Active: 7176084 kB' 'Inactive: 3506552 kB' 'Active(anon): 6781732 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513412 kB' 'Mapped: 173096 kB' 'Shmem: 6271508 kB' 'KReclaimable: 185956 kB' 'Slab: 550280 kB' 'SReclaimable: 185956 kB' 'SUnreclaim: 364324 kB' 'KernelStack: 12832 kB' 'PageTables: 7656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7896016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 15947776 kB' 'DirectMap1G: 51380224 kB' 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.098 05:58:01 
-- setup/common.sh@31 -- # read -r var val _ 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.098 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.098 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 
-- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.099 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.099 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:55.100 05:58:01 -- setup/common.sh@33 -- # echo 0 00:02:55.100 05:58:01 -- setup/common.sh@33 -- # return 0 00:02:55.100 05:58:01 -- setup/hugepages.sh@100 -- # resv=0 00:02:55.100 05:58:01 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:55.100 nr_hugepages=1024 
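The trace above repeats one pattern twice: get_meminfo in setup/common.sh snapshots /proc/meminfo, walks it field by field with IFS=': ' until the requested key (first HugePages_Surp, then HugePages_Rsvd) matches, and echoes that key's value; both lookups return 0, so hugepages.sh records surp=0 and resv=0 before echoing nr_hugepages=1024. A minimal sketch of that lookup pattern, using a hypothetical helper name rather than the actual function in setup/common.sh:

    # Hedged sketch, not the SPDK script itself: look up a single key in
    # /proc/meminfo the way the trace above does, splitting on ': ' and
    # stopping at the first matching field.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"            # numeric value, e.g. HugePages_Rsvd -> 0
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    surp=$(get_meminfo_sketch HugePages_Surp)   # 0 in the run above
    resv=$(get_meminfo_sketch HugePages_Rsvd)   # 0 in the run above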
00:02:55.100 05:58:01 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:55.100 resv_hugepages=0 00:02:55.100 05:58:01 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:55.100 surplus_hugepages=0 00:02:55.100 05:58:01 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:55.100 anon_hugepages=0 00:02:55.100 05:58:01 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:55.100 05:58:01 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:55.100 05:58:01 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:55.100 05:58:01 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:55.100 05:58:01 -- setup/common.sh@18 -- # local node= 00:02:55.100 05:58:01 -- setup/common.sh@19 -- # local var val 00:02:55.100 05:58:01 -- setup/common.sh@20 -- # local mem_f mem 00:02:55.100 05:58:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.100 05:58:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:55.100 05:58:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:55.100 05:58:01 -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.100 05:58:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.100 05:58:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45889764 kB' 'MemAvailable: 49390024 kB' 'Buffers: 2704 kB' 'Cached: 10169720 kB' 'SwapCached: 0 kB' 'Active: 7176380 kB' 'Inactive: 3506552 kB' 'Active(anon): 6782028 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 513780 kB' 'Mapped: 172680 kB' 'Shmem: 6271520 kB' 'KReclaimable: 185956 kB' 'Slab: 550280 kB' 'SReclaimable: 185956 kB' 'SUnreclaim: 364324 kB' 'KernelStack: 12848 kB' 'PageTables: 7688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7896032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 15947776 kB' 'DirectMap1G: 51380224 kB' 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.100 05:58:01 -- 
setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.100 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.100 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.100 05:58:01 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # 
IFS=': ' 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:55.101 05:58:01 -- setup/common.sh@33 -- # echo 1024 00:02:55.101 05:58:01 -- setup/common.sh@33 -- # return 0 00:02:55.101 05:58:01 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:55.101 05:58:01 -- setup/hugepages.sh@112 -- # get_nodes 00:02:55.101 05:58:01 -- setup/hugepages.sh@27 -- # local node 00:02:55.101 05:58:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:55.101 05:58:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:55.101 05:58:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:55.101 05:58:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:55.101 05:58:01 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:55.101 05:58:01 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:55.101 05:58:01 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:55.101 05:58:01 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:55.101 05:58:01 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:55.101 05:58:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:55.101 05:58:01 -- setup/common.sh@18 -- # local node=0 00:02:55.101 05:58:01 -- setup/common.sh@19 -- # local var val 00:02:55.101 05:58:01 -- setup/common.sh@20 -- # local mem_f mem 00:02:55.101 05:58:01 -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:55.101 05:58:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:55.101 05:58:01 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:55.101 05:58:01 -- setup/common.sh@28 -- # mapfile -t mem 00:02:55.101 05:58:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.101 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.101 05:58:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27767852 kB' 'MemUsed: 5062032 kB' 'SwapCached: 0 kB' 'Active: 1890196 kB' 'Inactive: 108696 kB' 'Active(anon): 1779308 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1751148 kB' 'Mapped: 47748 kB' 'AnonPages: 250976 kB' 'Shmem: 1531564 kB' 'KernelStack: 8008 kB' 'PageTables: 4532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84212 kB' 'Slab: 301636 kB' 'SReclaimable: 84212 kB' 'SUnreclaim: 217424 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.101 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 
00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- 
setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 
00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # continue 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # IFS=': ' 00:02:55.102 05:58:01 -- setup/common.sh@31 -- # read -r var val _ 00:02:55.102 05:58:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:55.102 05:58:01 -- setup/common.sh@33 -- # echo 0 00:02:55.102 05:58:01 -- setup/common.sh@33 -- # return 0 00:02:55.102 05:58:01 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:55.102 05:58:01 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:55.102 05:58:01 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:55.102 05:58:01 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:55.102 05:58:01 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:55.102 node0=1024 expecting 1024 00:02:55.102 05:58:01 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:55.102 05:58:01 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:02:55.102 05:58:01 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:02:55.102 05:58:01 -- setup/hugepages.sh@202 -- # setup output 00:02:55.102 05:58:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:55.102 05:58:01 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:56.481 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:56.481 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:56.481 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:56.481 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:56.481 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:56.481 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:56.481 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:56.481 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:56.481 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:56.481 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:56.481 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:56.481 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:56.481 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:56.481 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:56.481 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:56.481 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:56.481 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:56.481 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:02:56.481 05:58:02 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:02:56.481 05:58:02 -- setup/hugepages.sh@89 -- # local node 00:02:56.481 05:58:02 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:56.481 05:58:02 -- setup/hugepages.sh@91 -- 
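The node-level check that just finished uses the same lookup, but with a node-local source: called with a node argument, get_meminfo switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo (the branch visible at common.sh@23/@24), and hugepages.sh then prints node0=1024 expecting 1024 and confirms the per-node counts add up to nr_hugepages before setup.sh is rerun with NRHUGE=512 and reports that 1024 hugepages are already allocated on node0. A rough, hedged sketch of that per-node summation, with hypothetical variable names rather than the hugepages.sh code:

    # Hedged sketch: add up the per-node HugePages_Total values from the
    # node-local meminfo files and check the sum against the system-wide
    # nr_hugepages, mirroring the "node0=1024 expecting 1024" check above.
    nr_hugepages=1024
    total=0
    for node_meminfo in /sys/devices/system/node/node[0-9]*/meminfo; do
        count=$(awk '/HugePages_Total:/ {print $NF}' "$node_meminfo")
        total=$((total + count))
    done
    echo "nodes provide ${total} hugepages, expecting ${nr_hugepages}"
    (( total == nr_hugepages ))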
# local sorted_s 00:02:56.481 05:58:02 -- setup/hugepages.sh@92 -- # local surp 00:02:56.481 05:58:02 -- setup/hugepages.sh@93 -- # local resv 00:02:56.481 05:58:02 -- setup/hugepages.sh@94 -- # local anon 00:02:56.481 05:58:02 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:56.481 05:58:02 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:56.481 05:58:02 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:56.481 05:58:02 -- setup/common.sh@18 -- # local node= 00:02:56.481 05:58:02 -- setup/common.sh@19 -- # local var val 00:02:56.481 05:58:02 -- setup/common.sh@20 -- # local mem_f mem 00:02:56.481 05:58:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.481 05:58:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.481 05:58:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.481 05:58:02 -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.481 05:58:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.481 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.481 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.482 05:58:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45894760 kB' 'MemAvailable: 49395020 kB' 'Buffers: 2704 kB' 'Cached: 10169772 kB' 'SwapCached: 0 kB' 'Active: 7178184 kB' 'Inactive: 3506552 kB' 'Active(anon): 6783832 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515524 kB' 'Mapped: 172668 kB' 'Shmem: 6271572 kB' 'KReclaimable: 185956 kB' 'Slab: 550028 kB' 'SReclaimable: 185956 kB' 'SUnreclaim: 364072 kB' 'KernelStack: 13312 kB' 'PageTables: 8060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7900240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196496 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 15947776 kB' 'DirectMap1G: 51380224 kB' 00:02:56.482 05:58:02 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.482 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.482 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.482 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.482 05:58:02 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.482 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.482 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.482 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.482 05:58:02 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.482 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.482 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.482 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.482 05:58:02 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.482 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.482 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.482 05:58:02 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:56.482 05:58:02 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.482 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.482 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.482 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.482 05:58:02 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.482 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.482 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.482 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.482 05:58:02 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.482 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.482 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.482 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.482 05:58:02 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.483 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.483 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.483 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.483 05:58:02 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.483 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.483 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.483 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.483 05:58:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.483 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.483 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.483 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.483 05:58:02 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.483 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.483 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.483 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.483 05:58:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.483 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.483 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.483 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.483 05:58:02 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.483 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.483 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.483 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.483 05:58:02 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.483 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.483 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.483 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.483 05:58:02 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.483 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.483 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.483 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.483 05:58:02 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.483 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.483 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.483 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.483 05:58:02 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.483 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.483 
05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.483 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.483 05:58:02 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.483 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.483 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.483 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.484 05:58:02 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.484 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.484 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.484 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.484 05:58:02 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.484 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.484 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.484 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.484 05:58:02 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.484 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.484 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.484 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.484 05:58:02 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.484 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.484 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.484 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.484 05:58:02 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.484 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.484 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.484 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.484 05:58:02 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.485 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.485 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.485 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.485 05:58:02 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.485 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.485 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.485 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.485 05:58:02 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.485 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.485 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.485 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.485 05:58:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.485 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.485 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.485 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.485 05:58:02 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.485 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.485 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.485 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.485 05:58:02 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.485 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.485 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.485 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.485 05:58:02 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.485 
05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.485 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.485 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.485 05:58:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.485 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.486 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.486 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.486 05:58:02 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.486 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.486 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.486 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.486 05:58:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.486 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.486 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.486 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.486 05:58:02 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.486 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.486 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.486 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.486 05:58:02 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.486 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.486 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.486 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.486 05:58:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.487 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.487 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.487 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.487 05:58:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.487 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.487 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.487 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.487 05:58:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.487 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.487 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.487 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.487 05:58:02 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.487 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.487 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.487 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.487 05:58:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.487 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.487 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.487 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.487 05:58:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:56.487 05:58:02 -- setup/common.sh@33 -- # echo 0 00:02:56.488 05:58:02 -- setup/common.sh@33 -- # return 0 00:02:56.488 05:58:02 -- setup/hugepages.sh@97 -- # anon=0 00:02:56.488 05:58:02 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:56.488 05:58:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:56.488 05:58:02 -- setup/common.sh@18 -- # local node= 00:02:56.488 05:58:02 -- setup/common.sh@19 -- # local var val 00:02:56.488 05:58:02 -- 
setup/common.sh@20 -- # local mem_f mem 00:02:56.488 05:58:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.488 05:58:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.488 05:58:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.488 05:58:02 -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.488 05:58:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.488 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.488 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.489 05:58:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45903960 kB' 'MemAvailable: 49404220 kB' 'Buffers: 2704 kB' 'Cached: 10169772 kB' 'SwapCached: 0 kB' 'Active: 7179124 kB' 'Inactive: 3506552 kB' 'Active(anon): 6784772 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 516952 kB' 'Mapped: 172744 kB' 'Shmem: 6271572 kB' 'KReclaimable: 185956 kB' 'Slab: 550092 kB' 'SReclaimable: 185956 kB' 'SUnreclaim: 364136 kB' 'KernelStack: 13280 kB' 'PageTables: 9248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7900252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196352 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 15947776 kB' 'DirectMap1G: 51380224 kB' 00:02:56.489 05:58:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.489 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.489 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.489 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.489 05:58:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.489 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.489 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.489 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.489 05:58:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.489 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.489 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.489 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.489 05:58:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.489 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.489 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.489 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.489 05:58:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.489 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.489 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.489 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.489 05:58:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.489 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.489 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.489 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 
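The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test at hugepages.sh@96 near the start of this excerpt gates the anonymous-hugepage count: the bracketed word is how the kernel reports the active transparent-hugepage mode, and AnonHugePages is only looked up when that mode is not "never". A minimal sketch of the same test follows; the control-file path is not shown in this excerpt and is assumed here to be the standard kernel THP interface.

    # Read the THP mode string, e.g. "always [madvise] never" as seen above.
    # Path assumed (standard kernel interface); not printed in this trace.
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        # THP may be in use, so the AnonHugePages counter is worth reading
        grep AnonHugePages /proc/meminfo
    else
        echo "THP disabled, treating anonymous hugepages as 0"
    fi
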
00:02:56.489 05:58:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.489 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.489 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.489 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.490 05:58:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.490 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.490 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.490 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.490 05:58:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.490 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.490 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.490 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.490 05:58:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.490 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.490 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.491 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.491 05:58:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.491 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.491 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.491 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.491 05:58:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.491 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.491 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.491 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.491 05:58:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.491 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.491 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.491 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.491 05:58:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.491 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.491 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.491 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.492 05:58:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.493 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.493 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.493 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.493 05:58:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.493 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.493 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.493 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.493 05:58:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.493 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.493 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.493 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.493 05:58:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.493 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.493 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.493 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.493 05:58:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.493 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.493 05:58:02 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:56.493 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.493 05:58:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.493 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.493 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.493 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.493 05:58:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.494 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.494 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.494 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.494 05:58:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.494 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.494 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.494 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.494 05:58:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.494 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.494 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.494 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.494 05:58:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.494 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.494 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.494 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.494 05:58:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.494 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.494 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.495 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.495 05:58:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.495 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.495 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.495 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.495 05:58:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.495 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.495 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.495 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.495 05:58:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.495 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.495 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.495 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.495 05:58:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.495 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.495 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.495 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.495 05:58:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.495 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.495 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.495 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.495 05:58:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.495 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.495 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.495 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.495 05:58:02 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.495 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.495 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.495 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.495 05:58:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.496 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.496 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.496 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.496 05:58:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.496 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.496 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.496 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.496 05:58:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.496 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.496 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.496 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.496 05:58:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.496 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.496 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.496 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.496 05:58:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.496 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.496 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.496 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.496 05:58:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.496 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.496 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.496 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.496 05:58:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.496 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.496 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.497 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.497 05:58:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.497 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.497 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.497 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.497 05:58:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.497 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.497 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.497 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.497 05:58:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.497 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.497 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.497 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.497 05:58:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.497 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.497 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.497 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.497 05:58:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.497 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.497 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 
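The long runs of [[ FieldName == ... ]] / continue entries above and below are get_meminfo from setup/common.sh walking every field of the captured meminfo snapshot until the requested key (AnonHugePages, here HugePages_Surp) matches, at which point it echoes the value and returns. A minimal sketch of that scan pattern, under the assumption of a simplified helper; simple_get_meminfo is a name invented for this sketch, and the real helper also accepts an optional NUMA node argument.

    #!/usr/bin/env bash
    shopt -s extglob
    # Capture a meminfo snapshot, strip any "Node N " prefix, and walk the
    # fields until the requested key matches (the repeated 'continue' entries
    # in the trace are the non-matching fields being skipped).
    simple_get_meminfo() {
        local get=$1 line var val _
        local -a mem
        mapfile -t mem < /proc/meminfo
        mem=("${mem[@]#Node +([0-9]) }")        # no-op for the global file
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"                         # value in kB (or a page count)
            return 0
        done
        return 1
    }
    simple_get_meminfo AnonHugePages            # prints 0 on this builder
    simple_get_meminfo HugePages_Surp           # prints 0
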
00:02:56.497 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.498 05:58:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.498 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.498 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.498 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.498 05:58:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.498 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.498 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.498 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.498 05:58:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.498 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.498 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.498 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.498 05:58:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.498 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.499 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.499 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.499 05:58:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.499 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.499 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.499 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.499 05:58:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.499 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.499 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.499 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.499 05:58:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.499 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.499 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.499 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.499 05:58:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.499 05:58:02 -- setup/common.sh@33 -- # echo 0 00:02:56.499 05:58:02 -- setup/common.sh@33 -- # return 0 00:02:56.499 05:58:02 -- setup/hugepages.sh@99 -- # surp=0 00:02:56.499 05:58:02 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:56.499 05:58:02 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:56.499 05:58:02 -- setup/common.sh@18 -- # local node= 00:02:56.499 05:58:02 -- setup/common.sh@19 -- # local var val 00:02:56.499 05:58:02 -- setup/common.sh@20 -- # local mem_f mem 00:02:56.499 05:58:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.499 05:58:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.500 05:58:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.500 05:58:02 -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.500 05:58:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.500 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.500 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.500 05:58:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45905308 kB' 'MemAvailable: 49405568 kB' 'Buffers: 2704 kB' 'Cached: 10169788 kB' 'SwapCached: 0 kB' 'Active: 7178036 kB' 'Inactive: 3506552 kB' 'Active(anon): 6783684 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515360 kB' 'Mapped: 172768 kB' 'Shmem: 6271588 kB' 'KReclaimable: 185956 kB' 'Slab: 550092 kB' 'SReclaimable: 185956 kB' 'SUnreclaim: 364136 kB' 'KernelStack: 13232 kB' 'PageTables: 9328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7898880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196368 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 15947776 kB' 'DirectMap1G: 51380224 kB' 00:02:56.500 05:58:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.500 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.500 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.500 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.500 05:58:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.500 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.501 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.501 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.501 05:58:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.501 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.501 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.501 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.501 05:58:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.501 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.501 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.501 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.501 05:58:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.501 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.501 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.501 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.501 05:58:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.501 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.501 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.501 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.501 05:58:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.501 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.501 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.501 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.501 05:58:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.501 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.501 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.501 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.502 05:58:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.502 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.502 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.502 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.502 05:58:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.502 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.502 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.502 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.502 05:58:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.502 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.502 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.502 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.502 05:58:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.502 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.502 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.502 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.502 05:58:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.502 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.502 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.502 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.502 05:58:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.502 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.502 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.502 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.502 05:58:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.502 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.502 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.502 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.502 05:58:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.502 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.503 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.503 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.503 05:58:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.503 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.503 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.503 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.503 05:58:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.503 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.503 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.503 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.503 05:58:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.503 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.503 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.503 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.503 05:58:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.503 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.503 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.503 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.503 05:58:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.503 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.503 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.503 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.503 05:58:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.503 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.503 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.503 05:58:02 -- setup/common.sh@31 -- # 
read -r var val _ 00:02:56.503 05:58:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.504 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.504 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.504 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.504 05:58:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.504 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.504 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.504 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.504 05:58:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.504 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.504 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.504 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.504 05:58:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.504 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.504 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.504 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.504 05:58:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.504 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.504 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.504 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.504 05:58:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.504 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.504 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.504 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.504 05:58:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.504 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.504 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.504 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.504 05:58:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.504 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.504 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.504 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.507 05:58:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.507 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.507 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.507 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.507 05:58:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.507 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.507 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.507 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.507 05:58:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.507 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.507 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.507 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.507 05:58:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.507 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.507 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.507 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.507 05:58:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.507 05:58:02 -- setup/common.sh@32 -- # continue 
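The [[ -e /sys/devices/system/node/node/meminfo ]] tests above are the node-selection step of the same helper: the node argument is empty for these system-wide lookups, so the interpolated path does not exist and mem_f stays /proc/meminfo, while the per-node lookup later in the trace (node=0) switches mem_f to /sys/devices/system/node/node0/meminfo. A sketch of that selection; pick_meminfo_file is a hypothetical name used only here.

    # Choose the meminfo source: global /proc/meminfo, or the per-node file
    # when a NUMA node id is given and its sysfs entry exists.
    pick_meminfo_file() {
        local node=${1-} mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        echo "$mem_f"
    }
    pick_meminfo_file       # -> /proc/meminfo (empty node, path does not exist)
    pick_meminfo_file 0     # -> /sys/devices/system/node/node0/meminfo (if node0 exists)
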
00:02:56.507 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.507 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.507 05:58:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.507 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.507 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.507 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.507 05:58:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.508 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.508 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.508 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.508 05:58:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.508 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.508 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.508 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.508 05:58:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.508 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.508 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.508 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.508 05:58:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.508 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.508 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.508 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.508 05:58:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.508 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.508 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.508 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.508 05:58:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.508 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.508 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.508 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.508 05:58:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.508 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.508 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.508 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.508 05:58:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.508 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.508 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.508 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.508 05:58:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.508 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.508 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.508 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.509 05:58:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.509 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.509 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.509 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.509 05:58:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.509 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.509 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.509 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.509 05:58:02 -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.509 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.509 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.509 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.509 05:58:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.509 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.509 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.509 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.509 05:58:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.509 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.509 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.509 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.509 05:58:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:56.509 05:58:02 -- setup/common.sh@33 -- # echo 0 00:02:56.509 05:58:02 -- setup/common.sh@33 -- # return 0 00:02:56.509 05:58:02 -- setup/hugepages.sh@100 -- # resv=0 00:02:56.509 05:58:02 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:56.509 nr_hugepages=1024 00:02:56.509 05:58:02 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:56.509 resv_hugepages=0 00:02:56.509 05:58:02 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:56.509 surplus_hugepages=0 00:02:56.509 05:58:02 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:56.509 anon_hugepages=0 00:02:56.509 05:58:02 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:56.510 05:58:02 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:56.510 05:58:02 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:56.510 05:58:02 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:56.510 05:58:02 -- setup/common.sh@18 -- # local node= 00:02:56.510 05:58:02 -- setup/common.sh@19 -- # local var val 00:02:56.510 05:58:02 -- setup/common.sh@20 -- # local mem_f mem 00:02:56.510 05:58:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.510 05:58:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:56.510 05:58:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:56.510 05:58:02 -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.510 05:58:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.510 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.510 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.511 05:58:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 45913128 kB' 'MemAvailable: 49413388 kB' 'Buffers: 2704 kB' 'Cached: 10169800 kB' 'SwapCached: 0 kB' 'Active: 7176932 kB' 'Inactive: 3506552 kB' 'Active(anon): 6782580 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514252 kB' 'Mapped: 172700 kB' 'Shmem: 6271600 kB' 'KReclaimable: 185956 kB' 'Slab: 550092 kB' 'SReclaimable: 185956 kB' 'SUnreclaim: 364136 kB' 'KernelStack: 12848 kB' 'PageTables: 7568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 7896240 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 34752 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1797724 kB' 'DirectMap2M: 15947776 kB' 'DirectMap1G: 51380224 kB' 00:02:56.511 05:58:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.511 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.511 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.511 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.511 05:58:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.511 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.511 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.511 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.511 05:58:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.512 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.512 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.512 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.512 05:58:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.512 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.512 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.512 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.512 05:58:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.512 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.512 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.512 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.513 05:58:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.513 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.514 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.514 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.514 05:58:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.514 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.514 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.514 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.514 05:58:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.514 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.514 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.514 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.514 05:58:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.514 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.514 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.514 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.514 05:58:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.514 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.515 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.515 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.515 05:58:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.515 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.515 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.515 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.515 05:58:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.515 
05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.515 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.515 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.515 05:58:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.515 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.515 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.515 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.515 05:58:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.515 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.515 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.515 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.515 05:58:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.516 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.516 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.516 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.516 05:58:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.516 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.516 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.516 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.516 05:58:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.516 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.516 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.516 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.516 05:58:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.516 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.516 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.516 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.517 05:58:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.517 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.517 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.517 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.517 05:58:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.517 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.517 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.517 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.517 05:58:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.517 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.517 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.517 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.517 05:58:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.517 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.517 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.517 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.517 05:58:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.517 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.517 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.517 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.517 05:58:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.517 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.517 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.517 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.517 
05:58:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.517 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.517 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.517 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.517 05:58:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.517 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.517 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.517 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.518 05:58:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.518 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.518 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.518 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.518 05:58:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.518 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.518 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.518 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.518 05:58:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.518 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.518 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.518 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.518 05:58:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.518 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.518 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.518 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.518 05:58:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.518 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.518 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.518 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.518 05:58:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.518 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.518 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.518 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.518 05:58:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.518 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.518 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.519 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.519 05:58:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.519 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.519 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.519 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.519 05:58:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.520 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.520 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.520 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.520 05:58:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.520 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.520 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.520 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.520 05:58:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.520 05:58:02 -- setup/common.sh@32 -- # continue 
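The echoes earlier in the trace (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) and the (( 1024 == nr_hugepages + surp + resv )) test are a consistency check: the HugePages_Total value being scanned here must equal the requested page count plus the surplus and reserved pages, 1024 == 1024 + 0 + 0 in this run. A standalone sketch of that arithmetic; the variable names mirror the trace and the awk lookup stands in for get_meminfo.

    # Verify that the kernel's total hugepage count matches what the test
    # expects to have configured.
    nr_hugepages=1024 surp=0 resv=0
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: $total pages"
    else
        echo "unexpected HugePages_Total: $total" >&2
    fi
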
00:02:56.520 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.520 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.520 05:58:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.520 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.520 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.520 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.520 05:58:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.520 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.520 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.520 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.520 05:58:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.520 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.520 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.520 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.521 05:58:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.521 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.521 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.521 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.521 05:58:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.521 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.521 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.521 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.521 05:58:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.521 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.521 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.521 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.521 05:58:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.521 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.521 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.521 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.521 05:58:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.521 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.521 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.521 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.521 05:58:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.521 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.521 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.521 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.521 05:58:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.521 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.521 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.521 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.522 05:58:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.522 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.522 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.522 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.522 05:58:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:56.522 05:58:02 -- setup/common.sh@33 -- # echo 1024 00:02:56.522 05:58:02 -- setup/common.sh@33 -- # return 0 00:02:56.522 05:58:02 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages 
+ surp + resv )) 00:02:56.522 05:58:02 -- setup/hugepages.sh@112 -- # get_nodes 00:02:56.522 05:58:02 -- setup/hugepages.sh@27 -- # local node 00:02:56.522 05:58:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:56.522 05:58:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:56.522 05:58:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:56.522 05:58:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:56.522 05:58:02 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:56.522 05:58:02 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:56.522 05:58:02 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:56.522 05:58:02 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:56.522 05:58:02 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:56.522 05:58:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:56.522 05:58:02 -- setup/common.sh@18 -- # local node=0 00:02:56.522 05:58:02 -- setup/common.sh@19 -- # local var val 00:02:56.523 05:58:02 -- setup/common.sh@20 -- # local mem_f mem 00:02:56.523 05:58:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:56.523 05:58:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:56.523 05:58:02 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:56.523 05:58:02 -- setup/common.sh@28 -- # mapfile -t mem 00:02:56.523 05:58:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27785264 kB' 'MemUsed: 5044620 kB' 'SwapCached: 0 kB' 'Active: 1889816 kB' 'Inactive: 108696 kB' 'Active(anon): 1778928 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 108696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1751224 kB' 'Mapped: 47748 kB' 'AnonPages: 250000 kB' 'Shmem: 1531640 kB' 'KernelStack: 7960 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84212 kB' 'Slab: 301744 kB' 'SReclaimable: 84212 kB' 'SUnreclaim: 217532 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # continue 
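get_nodes above walks /sys/devices/system/node/node* and records how many hugepages each node currently holds (1024 on node0, 0 on node1, no_nodes=2), after which each node's own meminfo file is scanned the same way as the global one. The exact counter get_nodes reads is not visible in this excerpt; the sketch below assumes the standard per-node sysfs nr_hugepages file for the 2048 kB page size reported in the snapshots.

    # Enumerate NUMA nodes and record their current 2 MB hugepage counts.
    shopt -s nullglob
    declare -A nodes_sys
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        n=${node_dir##*node}                    # numeric node id
        nodes_sys[$n]=$(< "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_sys[@]}"            # 2 on this dual-socket builder
    for n in "${!nodes_sys[@]}"; do
        echo "node$n: ${nodes_sys[$n]} hugepages of 2048 kB"
    done
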
00:02:56.787 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:02 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:02 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:03 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:03 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:03 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:03 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:03 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:03 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:03 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:03 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:03 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:03 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:03 -- setup/common.sh@32 -- # continue 00:02:56.787 05:58:03 -- setup/common.sh@31 -- # IFS=': ' 00:02:56.787 05:58:03 -- setup/common.sh@31 -- # read -r var val _ 00:02:56.787 05:58:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:56.787 05:58:03 -- setup/common.sh@33 -- # echo 0 00:02:56.787 05:58:03 -- setup/common.sh@33 -- # return 0 00:02:56.787 05:58:03 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:56.787 05:58:03 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:56.787 05:58:03 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:56.787 05:58:03 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:56.787 05:58:03 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:56.787 node0=1024 expecting 1024 00:02:56.787 05:58:03 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:56.787 00:02:56.787 real 0m2.941s 00:02:56.787 user 0m1.237s 00:02:56.787 sys 0m1.635s 00:02:56.787 05:58:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:56.787 05:58:03 -- common/autotest_common.sh@10 -- # set +x 00:02:56.787 ************************************ 00:02:56.787 END TEST no_shrink_alloc 00:02:56.787 ************************************ 00:02:56.787 05:58:03 -- setup/hugepages.sh@217 -- # clear_hp 00:02:56.787 05:58:03 -- setup/hugepages.sh@37 -- # local node hp 00:02:56.787 05:58:03 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:56.787 05:58:03 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:56.787 05:58:03 -- setup/hugepages.sh@41 -- # echo 0 00:02:56.787 
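The xtrace above is setup/common.sh's get_meminfo walking every meminfo key with an IFS=': ' read loop until it reaches the field it wants (HugePages_Total, then HugePages_Surp for node 0), followed by setup/hugepages.sh's clear_hp zeroing every hugepage pool per node. A minimal standalone sketch of those two patterns follows; the function names and the sed/awk parsing are illustrative stand-ins, not the exact SPDK helpers.

#!/usr/bin/env bash
# get_meminfo <Field> [node]  -> print the field's value (kB or count), as the trace above does
get_meminfo() {
    local field=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node meminfo prefixes each line with "Node <n> "; strip it, then match the key.
    sed 's/^Node [0-9]* //' "$mem_f" | awk -v key="$field:" '$1 == key {print $2; exit}'
}

# Zero every hugepage pool on every NUMA node, which is what clear_hp does here (needs root).
clear_hugepages() {
    local node hp
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
}

get_meminfo HugePages_Total 0   # e.g. prints 1024 on the node this run configured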
05:58:03 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:56.787 05:58:03 -- setup/hugepages.sh@41 -- # echo 0 00:02:56.787 05:58:03 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:56.787 05:58:03 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:56.787 05:58:03 -- setup/hugepages.sh@41 -- # echo 0 00:02:56.787 05:58:03 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:56.787 05:58:03 -- setup/hugepages.sh@41 -- # echo 0 00:02:56.787 05:58:03 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:56.787 05:58:03 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:56.787 00:02:56.787 real 0m11.536s 00:02:56.787 user 0m4.490s 00:02:56.787 sys 0m5.942s 00:02:56.787 05:58:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:56.787 05:58:03 -- common/autotest_common.sh@10 -- # set +x 00:02:56.787 ************************************ 00:02:56.787 END TEST hugepages 00:02:56.787 ************************************ 00:02:56.787 05:58:03 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:56.787 05:58:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:56.787 05:58:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:56.787 05:58:03 -- common/autotest_common.sh@10 -- # set +x 00:02:56.787 ************************************ 00:02:56.787 START TEST driver 00:02:56.787 ************************************ 00:02:56.787 05:58:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:56.787 * Looking for test storage... 
00:02:56.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:56.787 05:58:03 -- setup/driver.sh@68 -- # setup reset 00:02:56.787 05:58:03 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:56.788 05:58:03 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:59.318 05:58:05 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:02:59.318 05:58:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:59.318 05:58:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:59.318 05:58:05 -- common/autotest_common.sh@10 -- # set +x 00:02:59.318 ************************************ 00:02:59.318 START TEST guess_driver 00:02:59.318 ************************************ 00:02:59.318 05:58:05 -- common/autotest_common.sh@1104 -- # guess_driver 00:02:59.318 05:58:05 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:02:59.318 05:58:05 -- setup/driver.sh@47 -- # local fail=0 00:02:59.318 05:58:05 -- setup/driver.sh@49 -- # pick_driver 00:02:59.318 05:58:05 -- setup/driver.sh@36 -- # vfio 00:02:59.318 05:58:05 -- setup/driver.sh@21 -- # local iommu_grups 00:02:59.318 05:58:05 -- setup/driver.sh@22 -- # local unsafe_vfio 00:02:59.318 05:58:05 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:02:59.318 05:58:05 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:02:59.318 05:58:05 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:02:59.318 05:58:05 -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:02:59.318 05:58:05 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:02:59.318 05:58:05 -- setup/driver.sh@14 -- # mod vfio_pci 00:02:59.318 05:58:05 -- setup/driver.sh@12 -- # dep vfio_pci 00:02:59.318 05:58:05 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:02:59.318 05:58:05 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:02:59.318 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:59.318 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:59.318 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:59.318 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:59.318 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:02:59.318 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:02:59.318 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:02:59.318 05:58:05 -- setup/driver.sh@30 -- # return 0 00:02:59.318 05:58:05 -- setup/driver.sh@37 -- # echo vfio-pci 00:02:59.318 05:58:05 -- setup/driver.sh@49 -- # driver=vfio-pci 00:02:59.318 05:58:05 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:02:59.318 05:58:05 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:02:59.318 Looking for driver=vfio-pci 00:02:59.318 05:58:05 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:59.318 05:58:05 -- setup/driver.sh@45 -- # setup output config 00:02:59.318 05:58:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:59.318 05:58:05 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:00.691 05:58:06 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.691 05:58:06 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:03:00.691 05:58:06 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.691 05:58:06 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.691 05:58:06 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.691 05:58:06 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.691 05:58:06 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.691 05:58:06 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.691 05:58:06 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.691 05:58:06 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.691 05:58:06 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.691 05:58:06 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.691 05:58:06 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.691 05:58:06 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.691 05:58:06 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.691 05:58:06 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.691 05:58:06 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.691 05:58:06 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.691 05:58:06 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.691 05:58:06 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.691 05:58:06 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.691 05:58:06 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.691 05:58:06 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.691 05:58:06 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.691 05:58:06 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.691 05:58:06 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.691 05:58:06 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.691 05:58:06 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.691 05:58:06 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.691 05:58:06 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.691 05:58:06 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.691 05:58:06 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.691 05:58:06 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.691 05:58:06 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.691 05:58:06 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.691 05:58:06 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.691 05:58:06 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.691 05:58:06 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.691 05:58:06 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.691 05:58:06 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.691 05:58:06 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.691 05:58:06 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.691 05:58:06 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.691 05:58:06 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.691 05:58:06 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:00.691 05:58:06 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:00.691 05:58:06 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:00.691 05:58:06 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.627 05:58:07 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:03:01.627 05:58:07 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:01.627 05:58:07 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:01.627 05:58:07 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:01.627 05:58:07 -- setup/driver.sh@65 -- # setup reset 00:03:01.627 05:58:07 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:01.627 05:58:07 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:04.157 00:03:04.157 real 0m4.852s 00:03:04.157 user 0m1.150s 00:03:04.157 sys 0m1.837s 00:03:04.157 05:58:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:04.157 05:58:10 -- common/autotest_common.sh@10 -- # set +x 00:03:04.157 ************************************ 00:03:04.157 END TEST guess_driver 00:03:04.157 ************************************ 00:03:04.157 00:03:04.157 real 0m7.424s 00:03:04.157 user 0m1.717s 00:03:04.157 sys 0m2.856s 00:03:04.157 05:58:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:04.157 05:58:10 -- common/autotest_common.sh@10 -- # set +x 00:03:04.157 ************************************ 00:03:04.157 END TEST driver 00:03:04.157 ************************************ 00:03:04.157 05:58:10 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:04.157 05:58:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:04.157 05:58:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:04.157 05:58:10 -- common/autotest_common.sh@10 -- # set +x 00:03:04.157 ************************************ 00:03:04.157 START TEST devices 00:03:04.157 ************************************ 00:03:04.157 05:58:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:04.157 * Looking for test storage... 
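The guess_driver test that just finished picks a userspace PCI driver by counting /sys/kernel/iommu_groups entries (141 on this host) and checking that modprobe --show-depends vfio_pci resolves to real .ko modules before settling on vfio-pci. A trimmed-down sketch of that decision is below; the uio_pci_generic fallback is an assumption, since this run only exercised the vfio path.

#!/usr/bin/env bash
shopt -s nullglob   # so an absent iommu_groups directory counts as zero groups
pick_driver() {
    local groups=(/sys/kernel/iommu_groups/*)
    # vfio-pci needs a working IOMMU (or the unsafe no-IOMMU module option) and a loadable module.
    if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
        echo vfio-pci
    else
        echo uio_pci_generic   # assumed fallback; not shown in this log
    fi
}
pick_driver   # prints "vfio-pci" on a host like GP11 above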
00:03:04.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:04.157 05:58:10 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:04.157 05:58:10 -- setup/devices.sh@192 -- # setup reset 00:03:04.157 05:58:10 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:04.157 05:58:10 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:05.529 05:58:12 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:05.529 05:58:12 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:05.529 05:58:12 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:05.529 05:58:12 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:05.529 05:58:12 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:05.529 05:58:12 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:05.529 05:58:12 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:05.529 05:58:12 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:05.529 05:58:12 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:05.529 05:58:12 -- setup/devices.sh@196 -- # blocks=() 00:03:05.529 05:58:12 -- setup/devices.sh@196 -- # declare -a blocks 00:03:05.529 05:58:12 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:05.529 05:58:12 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:05.529 05:58:12 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:05.529 05:58:12 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:05.529 05:58:12 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:05.529 05:58:12 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:05.529 05:58:12 -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:03:05.529 05:58:12 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:05.529 05:58:12 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:05.529 05:58:12 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:03:05.529 05:58:12 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:05.788 No valid GPT data, bailing 00:03:05.788 05:58:12 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:05.788 05:58:12 -- scripts/common.sh@393 -- # pt= 00:03:05.788 05:58:12 -- scripts/common.sh@394 -- # return 1 00:03:05.788 05:58:12 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:05.788 05:58:12 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:05.788 05:58:12 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:05.788 05:58:12 -- setup/common.sh@80 -- # echo 1000204886016 00:03:05.788 05:58:12 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:05.788 05:58:12 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:05.788 05:58:12 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:03:05.788 05:58:12 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:05.788 05:58:12 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:05.788 05:58:12 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:05.788 05:58:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:05.788 05:58:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:05.788 05:58:12 -- common/autotest_common.sh@10 -- # set +x 00:03:05.788 ************************************ 00:03:05.788 START TEST nvme_mount 00:03:05.788 ************************************ 00:03:05.788 05:58:12 -- 
common/autotest_common.sh@1104 -- # nvme_mount 00:03:05.788 05:58:12 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:05.788 05:58:12 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:05.788 05:58:12 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:05.788 05:58:12 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:05.788 05:58:12 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:05.788 05:58:12 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:05.788 05:58:12 -- setup/common.sh@40 -- # local part_no=1 00:03:05.788 05:58:12 -- setup/common.sh@41 -- # local size=1073741824 00:03:05.788 05:58:12 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:05.788 05:58:12 -- setup/common.sh@44 -- # parts=() 00:03:05.788 05:58:12 -- setup/common.sh@44 -- # local parts 00:03:05.788 05:58:12 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:05.788 05:58:12 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:05.788 05:58:12 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:05.788 05:58:12 -- setup/common.sh@46 -- # (( part++ )) 00:03:05.788 05:58:12 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:05.788 05:58:12 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:05.788 05:58:12 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:05.788 05:58:12 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:06.724 Creating new GPT entries in memory. 00:03:06.724 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:06.724 other utilities. 00:03:06.724 05:58:13 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:06.724 05:58:13 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:06.724 05:58:13 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:06.724 05:58:13 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:06.724 05:58:13 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:07.657 Creating new GPT entries in memory. 00:03:07.657 The operation has completed successfully. 
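nvme_mount starts by zapping the disk and carving a single 1 GiB partition (sectors 2048-2099199); the sync_dev_uevents.sh wrapper is only there so the script does not race udev for the new partition node. A rough equivalent without that helper, using plain udevadm; the disk name, mount point, and sizes below are taken from this run or chosen for illustration and will differ on other hosts.

#!/usr/bin/env bash
set -e
disk=/dev/nvme0n1                     # the disk this run used
sgdisk "$disk" --zap-all              # wipe any existing GPT/MBR structures
# One 1 GiB partition: sectors 2048..2099199 at 512 bytes each.
flock "$disk" sgdisk "$disk" --new=1:2048:2099199
udevadm settle                        # wait for the kernel/udev to publish the partition node
test -b "${disk}p1"
mkfs.ext4 -qF "${disk}p1"
mount "${disk}p1" /mnt                # mount point is illustrative; the test uses .../test/setup/nvme_mount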
00:03:07.657 05:58:14 -- setup/common.sh@57 -- # (( part++ )) 00:03:07.657 05:58:14 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:07.657 05:58:14 -- setup/common.sh@62 -- # wait 982600 00:03:07.657 05:58:14 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:07.657 05:58:14 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:07.657 05:58:14 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:07.657 05:58:14 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:07.657 05:58:14 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:07.915 05:58:14 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:07.915 05:58:14 -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:07.915 05:58:14 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:07.915 05:58:14 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:07.915 05:58:14 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:07.915 05:58:14 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:07.915 05:58:14 -- setup/devices.sh@53 -- # local found=0 00:03:07.915 05:58:14 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:07.915 05:58:14 -- setup/devices.sh@56 -- # : 00:03:07.915 05:58:14 -- setup/devices.sh@59 -- # local pci status 00:03:07.915 05:58:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.915 05:58:14 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:07.915 05:58:14 -- setup/devices.sh@47 -- # setup output config 00:03:07.915 05:58:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:07.915 05:58:14 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:08.849 05:58:15 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.849 05:58:15 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:08.849 05:58:15 -- setup/devices.sh@63 -- # found=1 00:03:08.849 05:58:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.849 05:58:15 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.849 05:58:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.849 05:58:15 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.849 05:58:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.849 05:58:15 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.849 05:58:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.849 05:58:15 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.849 05:58:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.849 05:58:15 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.849 
05:58:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.849 05:58:15 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.849 05:58:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.849 05:58:15 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.849 05:58:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.849 05:58:15 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.849 05:58:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.849 05:58:15 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.849 05:58:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.849 05:58:15 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.849 05:58:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.849 05:58:15 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.849 05:58:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.849 05:58:15 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.849 05:58:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.849 05:58:15 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.849 05:58:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.849 05:58:15 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.849 05:58:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.849 05:58:15 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.849 05:58:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:08.849 05:58:15 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:08.849 05:58:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.107 05:58:15 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:09.107 05:58:15 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:09.107 05:58:15 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:09.107 05:58:15 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:09.107 05:58:15 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:09.107 05:58:15 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:09.107 05:58:15 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:09.107 05:58:15 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:09.107 05:58:15 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:09.107 05:58:15 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:09.107 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:09.107 05:58:15 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:09.107 05:58:15 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:09.366 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:09.366 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:09.366 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:09.366 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:09.366 05:58:15 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:09.366 05:58:15 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:09.366 05:58:15 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:09.366 05:58:15 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:09.366 05:58:15 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:09.366 05:58:15 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:09.624 05:58:15 -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:09.624 05:58:15 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:09.624 05:58:15 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:09.624 05:58:15 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:09.624 05:58:15 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:09.624 05:58:15 -- setup/devices.sh@53 -- # local found=0 00:03:09.624 05:58:15 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:09.624 05:58:15 -- setup/devices.sh@56 -- # : 00:03:09.624 05:58:15 -- setup/devices.sh@59 -- # local pci status 00:03:09.624 05:58:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:09.624 05:58:15 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:09.624 05:58:15 -- setup/devices.sh@47 -- # setup output config 00:03:09.624 05:58:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.624 05:58:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:10.563 05:58:16 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.563 05:58:16 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:10.563 05:58:16 -- setup/devices.sh@63 -- # found=1 00:03:10.563 05:58:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.563 05:58:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.563 05:58:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.563 05:58:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.563 05:58:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.563 05:58:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.563 05:58:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.563 05:58:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.563 05:58:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.563 05:58:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.563 05:58:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.563 05:58:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.563 05:58:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.563 05:58:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.563 05:58:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.563 05:58:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.563 05:58:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.563 05:58:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.563 05:58:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.563 05:58:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.563 05:58:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.563 05:58:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.563 05:58:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.563 05:58:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.563 05:58:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.563 05:58:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.563 05:58:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.563 05:58:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.563 05:58:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.563 05:58:16 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.563 05:58:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.563 05:58:16 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:10.563 05:58:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.821 05:58:17 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:10.821 05:58:17 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:10.821 05:58:17 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:10.821 05:58:17 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:10.821 05:58:17 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:10.821 05:58:17 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:10.821 05:58:17 -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:03:10.821 05:58:17 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:10.821 05:58:17 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:10.821 05:58:17 -- setup/devices.sh@50 -- # local mount_point= 00:03:10.821 05:58:17 -- setup/devices.sh@51 -- # local test_file= 00:03:10.821 05:58:17 -- setup/devices.sh@53 -- # local found=0 00:03:10.821 05:58:17 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:10.821 05:58:17 -- setup/devices.sh@59 -- # local pci status 00:03:10.821 05:58:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:10.821 05:58:17 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:10.821 05:58:17 -- setup/devices.sh@47 -- # setup output config 00:03:10.821 05:58:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:10.821 05:58:17 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:12.197 05:58:18 -- 
setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.197 05:58:18 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:12.197 05:58:18 -- setup/devices.sh@63 -- # found=1 00:03:12.197 05:58:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.197 05:58:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.197 05:58:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.197 05:58:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.197 05:58:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.197 05:58:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.197 05:58:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.197 05:58:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.197 05:58:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.197 05:58:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.197 05:58:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.197 05:58:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.197 05:58:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.197 05:58:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.197 05:58:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.197 05:58:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.197 05:58:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.197 05:58:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.197 05:58:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.197 05:58:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.197 05:58:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.197 05:58:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.197 05:58:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.197 05:58:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.197 05:58:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.197 05:58:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.197 05:58:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.197 05:58:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.197 05:58:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.197 05:58:18 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.197 05:58:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.197 05:58:18 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:12.197 05:58:18 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.197 05:58:18 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:12.197 05:58:18 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:12.197 05:58:18 -- setup/devices.sh@68 -- # return 0 00:03:12.198 05:58:18 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:12.198 05:58:18 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:12.198 05:58:18 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:03:12.198 05:58:18 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:12.198 05:58:18 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:12.198 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:12.198 00:03:12.198 real 0m6.467s 00:03:12.198 user 0m1.558s 00:03:12.198 sys 0m2.497s 00:03:12.198 05:58:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:12.198 05:58:18 -- common/autotest_common.sh@10 -- # set +x 00:03:12.198 ************************************ 00:03:12.198 END TEST nvme_mount 00:03:12.198 ************************************ 00:03:12.198 05:58:18 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:12.198 05:58:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:12.198 05:58:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:12.198 05:58:18 -- common/autotest_common.sh@10 -- # set +x 00:03:12.198 ************************************ 00:03:12.198 START TEST dm_mount 00:03:12.198 ************************************ 00:03:12.198 05:58:18 -- common/autotest_common.sh@1104 -- # dm_mount 00:03:12.198 05:58:18 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:12.198 05:58:18 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:12.198 05:58:18 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:12.198 05:58:18 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:12.198 05:58:18 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:12.198 05:58:18 -- setup/common.sh@40 -- # local part_no=2 00:03:12.198 05:58:18 -- setup/common.sh@41 -- # local size=1073741824 00:03:12.198 05:58:18 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:12.198 05:58:18 -- setup/common.sh@44 -- # parts=() 00:03:12.198 05:58:18 -- setup/common.sh@44 -- # local parts 00:03:12.198 05:58:18 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:12.198 05:58:18 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:12.198 05:58:18 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:12.198 05:58:18 -- setup/common.sh@46 -- # (( part++ )) 00:03:12.198 05:58:18 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:12.198 05:58:18 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:12.198 05:58:18 -- setup/common.sh@46 -- # (( part++ )) 00:03:12.198 05:58:18 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:12.198 05:58:18 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:12.198 05:58:18 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:12.198 05:58:18 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:13.134 Creating new GPT entries in memory. 00:03:13.134 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:13.134 other utilities. 00:03:13.134 05:58:19 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:13.134 05:58:19 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:13.134 05:58:19 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:13.134 05:58:19 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:13.134 05:58:19 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:14.511 Creating new GPT entries in memory. 00:03:14.511 The operation has completed successfully. 
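dm_mount repeats the same partitioning dance, this time with two 1 GiB partitions, and then stacks a device-mapper node named nvme_dm_test on top of them before formatting and mounting it. The dmsetup table itself is not visible in this log, so the linear concatenation below is just one way to build an equivalent device over the two partitions.

#!/usr/bin/env bash
set -e
p1=/dev/nvme0n1p1 p2=/dev/nvme0n1p2    # the two partitions created by this test
s1=$(blockdev --getsz "$p1")           # sizes in 512-byte sectors
s2=$(blockdev --getsz "$p2")
# Table rows are "<logical start> <length> linear <backing dev> <offset>".
dmsetup create nvme_dm_test <<EOF
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF
test -e /dev/mapper/nvme_dm_test
mkfs.ext4 -qF /dev/mapper/nvme_dm_test
# Teardown mirrors cleanup_dm later in the log:
#   umount <mountpoint>; dmsetup remove --force nvme_dm_test; wipefs --all "$p1" "$p2"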
00:03:14.511 05:58:20 -- setup/common.sh@57 -- # (( part++ )) 00:03:14.511 05:58:20 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:14.511 05:58:20 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:14.511 05:58:20 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:14.511 05:58:20 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:15.448 The operation has completed successfully. 00:03:15.448 05:58:21 -- setup/common.sh@57 -- # (( part++ )) 00:03:15.448 05:58:21 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:15.448 05:58:21 -- setup/common.sh@62 -- # wait 985055 00:03:15.448 05:58:21 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:15.448 05:58:21 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:15.448 05:58:21 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:15.448 05:58:21 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:15.448 05:58:21 -- setup/devices.sh@160 -- # for t in {1..5} 00:03:15.448 05:58:21 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:15.448 05:58:21 -- setup/devices.sh@161 -- # break 00:03:15.448 05:58:21 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:15.448 05:58:21 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:15.448 05:58:21 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:15.448 05:58:21 -- setup/devices.sh@166 -- # dm=dm-0 00:03:15.448 05:58:21 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:15.448 05:58:21 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:15.448 05:58:21 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:15.448 05:58:21 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:15.449 05:58:21 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:15.449 05:58:21 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:15.449 05:58:21 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:15.449 05:58:21 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:15.449 05:58:21 -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:15.449 05:58:21 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:15.449 05:58:21 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:15.449 05:58:21 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:15.449 05:58:21 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:15.449 05:58:21 -- setup/devices.sh@53 -- # local found=0 00:03:15.449 05:58:21 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:15.449 05:58:21 -- setup/devices.sh@56 -- # : 00:03:15.449 05:58:21 -- 
setup/devices.sh@59 -- # local pci status 00:03:15.449 05:58:21 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.449 05:58:21 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:15.449 05:58:21 -- setup/devices.sh@47 -- # setup output config 00:03:15.449 05:58:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.449 05:58:21 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:16.384 05:58:22 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.384 05:58:22 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:16.384 05:58:22 -- setup/devices.sh@63 -- # found=1 00:03:16.384 05:58:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.384 05:58:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.384 05:58:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.384 05:58:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.384 05:58:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.384 05:58:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.384 05:58:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.384 05:58:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.384 05:58:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.384 05:58:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.384 05:58:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.384 05:58:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.384 05:58:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.384 05:58:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.384 05:58:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.384 05:58:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.384 05:58:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.384 05:58:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.384 05:58:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.384 05:58:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.384 05:58:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.384 05:58:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.384 05:58:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.384 05:58:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.384 05:58:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.384 05:58:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.384 05:58:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.384 05:58:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.384 05:58:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.384 05:58:22 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.384 05:58:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.384 05:58:22 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:16.384 05:58:22 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.641 05:58:22 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:16.641 05:58:22 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:16.641 05:58:22 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:16.641 05:58:22 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:16.641 05:58:22 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:16.641 05:58:22 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:16.641 05:58:23 -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:16.641 05:58:23 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:16.641 05:58:23 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:16.641 05:58:23 -- setup/devices.sh@50 -- # local mount_point= 00:03:16.641 05:58:23 -- setup/devices.sh@51 -- # local test_file= 00:03:16.641 05:58:23 -- setup/devices.sh@53 -- # local found=0 00:03:16.641 05:58:23 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:16.641 05:58:23 -- setup/devices.sh@59 -- # local pci status 00:03:16.641 05:58:23 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:16.641 05:58:23 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:16.641 05:58:23 -- setup/devices.sh@47 -- # setup output config 00:03:16.641 05:58:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:16.641 05:58:23 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:17.595 05:58:24 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.596 05:58:24 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:17.596 05:58:24 -- setup/devices.sh@63 -- # found=1 00:03:17.596 05:58:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.596 05:58:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.596 05:58:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.596 05:58:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.596 05:58:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.596 05:58:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.596 05:58:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.596 05:58:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.596 05:58:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.596 05:58:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.596 05:58:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.596 05:58:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.596 05:58:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.596 05:58:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.596 05:58:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 
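Each verify step in these device tests re-runs scripts/setup.sh config with PCI_ALLOWED pinned to 0000:88:00.0 and parses the output three columns at a time (read -r pci _ _ status), looking for the "Active devices: ..., so not binding PCI dev" message against the device under test; that is what the long runs of [[ 0000:XX:04.Y == 0000:88:00.0 ]] checks above and below are doing, one line per PCI function. A stripped-down version of that loop follows; the function name and the relative setup.sh path are illustrative.

#!/usr/bin/env bash
# verify_active <BDF> <expected mount@/holder@ fragment>
verify_active() {
    local target=$1 expect=$2 pci _ status found=0
    while read -r pci _ _ status; do
        [[ $pci == "$target" ]] || continue
        # Matching lines read like: "0000:88:00.0 (8086 0a54): Active devices: mount@nvme0n1:nvme_dm_test, so not binding PCI dev"
        [[ $status == *"Active devices: "*"$expect"* ]] && found=1
    done < <(PCI_ALLOWED=$target ./scripts/setup.sh config)
    (( found == 1 ))
}
# e.g. verify_active 0000:88:00.0 nvme0n1:nvme_dm_test   (the dm_mount verify a few lines up)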
00:03:17.596 05:58:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.596 05:58:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.596 05:58:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.596 05:58:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.596 05:58:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.596 05:58:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.855 05:58:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.855 05:58:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.855 05:58:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.855 05:58:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.855 05:58:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.855 05:58:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.855 05:58:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.855 05:58:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.855 05:58:24 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.855 05:58:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.855 05:58:24 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:17.855 05:58:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:17.855 05:58:24 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:17.855 05:58:24 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:17.855 05:58:24 -- setup/devices.sh@68 -- # return 0 00:03:17.855 05:58:24 -- setup/devices.sh@187 -- # cleanup_dm 00:03:17.855 05:58:24 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:17.855 05:58:24 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:17.855 05:58:24 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:17.855 05:58:24 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:17.855 05:58:24 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:17.855 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:17.855 05:58:24 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:17.855 05:58:24 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:17.855 00:03:17.855 real 0m5.775s 00:03:17.855 user 0m0.977s 00:03:17.855 sys 0m1.656s 00:03:17.855 05:58:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:17.855 05:58:24 -- common/autotest_common.sh@10 -- # set +x 00:03:17.855 ************************************ 00:03:17.855 END TEST dm_mount 00:03:17.855 ************************************ 00:03:18.115 05:58:24 -- setup/devices.sh@1 -- # cleanup 00:03:18.115 05:58:24 -- setup/devices.sh@11 -- # cleanup_nvme 00:03:18.115 05:58:24 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:18.115 05:58:24 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:18.115 05:58:24 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:18.115 05:58:24 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:18.115 05:58:24 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:18.374 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:18.374 /dev/nvme0n1: 8 bytes were erased at offset 
0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:18.374 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:18.374 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:18.374 05:58:24 -- setup/devices.sh@12 -- # cleanup_dm 00:03:18.374 05:58:24 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:18.374 05:58:24 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:18.374 05:58:24 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:18.374 05:58:24 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:18.374 05:58:24 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:18.374 05:58:24 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:18.374 00:03:18.374 real 0m14.137s 00:03:18.374 user 0m3.203s 00:03:18.374 sys 0m5.141s 00:03:18.374 05:58:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:18.374 05:58:24 -- common/autotest_common.sh@10 -- # set +x 00:03:18.374 ************************************ 00:03:18.374 END TEST devices 00:03:18.374 ************************************ 00:03:18.374 00:03:18.374 real 0m43.513s 00:03:18.374 user 0m12.693s 00:03:18.374 sys 0m19.130s 00:03:18.374 05:58:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:18.374 05:58:24 -- common/autotest_common.sh@10 -- # set +x 00:03:18.374 ************************************ 00:03:18.374 END TEST setup.sh 00:03:18.374 ************************************ 00:03:18.374 05:58:24 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:19.310 Hugepages 00:03:19.310 node hugesize free / total 00:03:19.310 node0 1048576kB 0 / 0 00:03:19.310 node0 2048kB 2048 / 2048 00:03:19.310 node1 1048576kB 0 / 0 00:03:19.310 node1 2048kB 0 / 0 00:03:19.310 00:03:19.310 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:19.310 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:19.310 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:19.310 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:19.310 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:19.310 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:19.310 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:19.310 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:19.310 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:19.310 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:19.310 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:19.310 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:19.310 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:19.310 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:19.310 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:19.310 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:19.310 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:19.568 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:19.568 05:58:25 -- spdk/autotest.sh@141 -- # uname -s 00:03:19.568 05:58:25 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:03:19.568 05:58:25 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:03:19.568 05:58:25 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:20.945 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:20.945 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:20.945 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:20.945 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:20.945 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:20.945 0000:00:04.2 (8086 0e22): 
ioatdma -> vfio-pci 00:03:20.945 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:20.945 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:20.945 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:20.945 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:20.945 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:20.945 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:20.945 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:20.945 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:20.945 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:20.945 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:21.513 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:21.772 05:58:28 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:22.709 05:58:29 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:22.709 05:58:29 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:22.709 05:58:29 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:03:22.709 05:58:29 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:03:22.709 05:58:29 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:22.709 05:58:29 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:22.709 05:58:29 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:22.709 05:58:29 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:22.709 05:58:29 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:22.968 05:58:29 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:22.968 05:58:29 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:03:22.968 05:58:29 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:23.948 Waiting for block devices as requested 00:03:23.948 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:24.210 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:24.210 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:24.210 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:24.469 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:24.469 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:24.469 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:24.469 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:24.727 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:24.727 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:24.727 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:24.727 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:24.984 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:24.984 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:24.984 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:24.984 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:25.242 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:25.242 05:58:31 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:03:25.242 05:58:31 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:25.242 05:58:31 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:25.242 05:58:31 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:03:25.242 05:58:31 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:25.242 05:58:31 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:25.242 05:58:31 -- 
common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:25.242 05:58:31 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:25.242 05:58:31 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:03:25.242 05:58:31 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:03:25.242 05:58:31 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:03:25.242 05:58:31 -- common/autotest_common.sh@1530 -- # grep oacs 00:03:25.242 05:58:31 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:03:25.242 05:58:31 -- common/autotest_common.sh@1530 -- # oacs=' 0xf' 00:03:25.242 05:58:31 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:03:25.243 05:58:31 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:03:25.243 05:58:31 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:03:25.243 05:58:31 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:03:25.243 05:58:31 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:03:25.243 05:58:31 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:03:25.243 05:58:31 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:03:25.243 05:58:31 -- common/autotest_common.sh@1542 -- # continue 00:03:25.243 05:58:31 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:03:25.243 05:58:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:25.243 05:58:31 -- common/autotest_common.sh@10 -- # set +x 00:03:25.243 05:58:31 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:03:25.243 05:58:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:25.243 05:58:31 -- common/autotest_common.sh@10 -- # set +x 00:03:25.243 05:58:31 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:26.616 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:26.616 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:26.616 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:26.616 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:26.616 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:26.616 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:26.616 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:26.616 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:26.616 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:26.616 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:26.616 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:26.616 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:26.616 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:26.616 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:26.616 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:26.616 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:27.549 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:27.549 05:58:33 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:03:27.549 05:58:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:27.549 05:58:33 -- common/autotest_common.sh@10 -- # set +x 00:03:27.549 05:58:33 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:03:27.549 05:58:33 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:27.549 05:58:33 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:27.549 05:58:33 -- common/autotest_common.sh@1562 -- # bdfs=() 00:03:27.549 05:58:33 -- common/autotest_common.sh@1562 -- # local bdfs 00:03:27.549 05:58:33 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:27.549 05:58:33 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:27.549 
05:58:33 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:27.549 05:58:33 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:27.549 05:58:33 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:27.549 05:58:33 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:27.549 05:58:34 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:27.549 05:58:34 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:03:27.549 05:58:34 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:03:27.549 05:58:34 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:03:27.549 05:58:34 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:03:27.549 05:58:34 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:27.549 05:58:34 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:03:27.549 05:58:34 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:88:00.0 00:03:27.549 05:58:34 -- common/autotest_common.sh@1577 -- # [[ -z 0000:88:00.0 ]] 00:03:27.549 05:58:34 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=990369 00:03:27.549 05:58:34 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:27.549 05:58:34 -- common/autotest_common.sh@1583 -- # waitforlisten 990369 00:03:27.549 05:58:34 -- common/autotest_common.sh@819 -- # '[' -z 990369 ']' 00:03:27.549 05:58:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:27.549 05:58:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:03:27.549 05:58:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:27.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:27.549 05:58:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:03:27.549 05:58:34 -- common/autotest_common.sh@10 -- # set +x 00:03:27.806 [2024-07-13 05:58:34.066519] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:03:27.806 [2024-07-13 05:58:34.066596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid990369 ] 00:03:27.806 EAL: No free 2048 kB hugepages reported on node 1 00:03:27.806 [2024-07-13 05:58:34.124018] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:27.806 [2024-07-13 05:58:34.231032] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:27.806 [2024-07-13 05:58:34.231183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:28.734 05:58:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:03:28.734 05:58:35 -- common/autotest_common.sh@852 -- # return 0 00:03:28.734 05:58:35 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:03:28.734 05:58:35 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:03:28.734 05:58:35 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:03:32.009 nvme0n1 00:03:32.009 05:58:38 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:32.009 [2024-07-13 05:58:38.345947] nvme_opal.c:2059:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:32.009 [2024-07-13 05:58:38.345987] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:32.009 request: 00:03:32.009 { 00:03:32.009 "nvme_ctrlr_name": "nvme0", 00:03:32.009 "password": "test", 00:03:32.009 "method": "bdev_nvme_opal_revert", 00:03:32.009 "req_id": 1 00:03:32.009 } 00:03:32.009 Got JSON-RPC error response 00:03:32.009 response: 00:03:32.009 { 00:03:32.009 "code": -32603, 00:03:32.009 "message": "Internal error" 00:03:32.009 } 00:03:32.009 05:58:38 -- common/autotest_common.sh@1589 -- # true 00:03:32.009 05:58:38 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:03:32.009 05:58:38 -- common/autotest_common.sh@1593 -- # killprocess 990369 00:03:32.009 05:58:38 -- common/autotest_common.sh@926 -- # '[' -z 990369 ']' 00:03:32.009 05:58:38 -- common/autotest_common.sh@930 -- # kill -0 990369 00:03:32.009 05:58:38 -- common/autotest_common.sh@931 -- # uname 00:03:32.009 05:58:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:03:32.009 05:58:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 990369 00:03:32.009 05:58:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:03:32.009 05:58:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:03:32.009 05:58:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 990369' 00:03:32.009 killing process with pid 990369 00:03:32.009 05:58:38 -- common/autotest_common.sh@945 -- # kill 990369 00:03:32.009 05:58:38 -- common/autotest_common.sh@950 -- # wait 990369 00:03:33.927 05:58:40 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:03:33.928 05:58:40 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:03:33.928 05:58:40 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:03:33.928 05:58:40 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:03:33.928 05:58:40 -- spdk/autotest.sh@173 -- # timing_enter lib 00:03:33.928 05:58:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:33.928 05:58:40 -- common/autotest_common.sh@10 -- # set +x 00:03:33.928 05:58:40 
-- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:33.928 05:58:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:33.928 05:58:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:33.928 05:58:40 -- common/autotest_common.sh@10 -- # set +x 00:03:33.928 ************************************ 00:03:33.928 START TEST env 00:03:33.928 ************************************ 00:03:33.928 05:58:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:33.928 * Looking for test storage... 00:03:33.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:33.928 05:58:40 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:33.928 05:58:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:33.928 05:58:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:33.928 05:58:40 -- common/autotest_common.sh@10 -- # set +x 00:03:33.928 ************************************ 00:03:33.928 START TEST env_memory 00:03:33.928 ************************************ 00:03:33.928 05:58:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:33.928 00:03:33.928 00:03:33.928 CUnit - A unit testing framework for C - Version 2.1-3 00:03:33.928 http://cunit.sourceforge.net/ 00:03:33.928 00:03:33.928 00:03:33.928 Suite: memory 00:03:33.928 Test: alloc and free memory map ...[2024-07-13 05:58:40.278332] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:33.928 passed 00:03:33.928 Test: mem map translation ...[2024-07-13 05:58:40.299610] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:33.928 [2024-07-13 05:58:40.299635] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:33.928 [2024-07-13 05:58:40.299679] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:33.928 [2024-07-13 05:58:40.299692] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:33.928 passed 00:03:33.928 Test: mem map registration ...[2024-07-13 05:58:40.344272] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:33.928 [2024-07-13 05:58:40.344293] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:33.928 passed 00:03:33.928 Test: mem map adjacent registrations ...passed 00:03:33.928 00:03:33.928 Run Summary: Type Total Ran Passed Failed Inactive 00:03:33.928 suites 1 1 n/a 0 0 00:03:33.928 tests 4 4 4 0 0 00:03:33.928 asserts 152 152 152 0 n/a 00:03:33.928 00:03:33.928 Elapsed time = 0.148 seconds 00:03:33.928 00:03:33.928 real 0m0.155s 00:03:33.928 user 0m0.145s 00:03:33.928 sys 0m0.009s 00:03:33.928 
05:58:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.928 05:58:40 -- common/autotest_common.sh@10 -- # set +x 00:03:33.928 ************************************ 00:03:33.928 END TEST env_memory 00:03:33.928 ************************************ 00:03:33.928 05:58:40 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:33.928 05:58:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:33.928 05:58:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:33.928 05:58:40 -- common/autotest_common.sh@10 -- # set +x 00:03:33.928 ************************************ 00:03:33.928 START TEST env_vtophys 00:03:33.928 ************************************ 00:03:33.928 05:58:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:34.188 EAL: lib.eal log level changed from notice to debug 00:03:34.188 EAL: Detected lcore 0 as core 0 on socket 0 00:03:34.188 EAL: Detected lcore 1 as core 1 on socket 0 00:03:34.188 EAL: Detected lcore 2 as core 2 on socket 0 00:03:34.188 EAL: Detected lcore 3 as core 3 on socket 0 00:03:34.188 EAL: Detected lcore 4 as core 4 on socket 0 00:03:34.188 EAL: Detected lcore 5 as core 5 on socket 0 00:03:34.188 EAL: Detected lcore 6 as core 8 on socket 0 00:03:34.188 EAL: Detected lcore 7 as core 9 on socket 0 00:03:34.188 EAL: Detected lcore 8 as core 10 on socket 0 00:03:34.188 EAL: Detected lcore 9 as core 11 on socket 0 00:03:34.188 EAL: Detected lcore 10 as core 12 on socket 0 00:03:34.188 EAL: Detected lcore 11 as core 13 on socket 0 00:03:34.188 EAL: Detected lcore 12 as core 0 on socket 1 00:03:34.188 EAL: Detected lcore 13 as core 1 on socket 1 00:03:34.188 EAL: Detected lcore 14 as core 2 on socket 1 00:03:34.188 EAL: Detected lcore 15 as core 3 on socket 1 00:03:34.188 EAL: Detected lcore 16 as core 4 on socket 1 00:03:34.188 EAL: Detected lcore 17 as core 5 on socket 1 00:03:34.188 EAL: Detected lcore 18 as core 8 on socket 1 00:03:34.188 EAL: Detected lcore 19 as core 9 on socket 1 00:03:34.188 EAL: Detected lcore 20 as core 10 on socket 1 00:03:34.188 EAL: Detected lcore 21 as core 11 on socket 1 00:03:34.188 EAL: Detected lcore 22 as core 12 on socket 1 00:03:34.188 EAL: Detected lcore 23 as core 13 on socket 1 00:03:34.188 EAL: Detected lcore 24 as core 0 on socket 0 00:03:34.188 EAL: Detected lcore 25 as core 1 on socket 0 00:03:34.188 EAL: Detected lcore 26 as core 2 on socket 0 00:03:34.188 EAL: Detected lcore 27 as core 3 on socket 0 00:03:34.188 EAL: Detected lcore 28 as core 4 on socket 0 00:03:34.188 EAL: Detected lcore 29 as core 5 on socket 0 00:03:34.188 EAL: Detected lcore 30 as core 8 on socket 0 00:03:34.188 EAL: Detected lcore 31 as core 9 on socket 0 00:03:34.188 EAL: Detected lcore 32 as core 10 on socket 0 00:03:34.188 EAL: Detected lcore 33 as core 11 on socket 0 00:03:34.188 EAL: Detected lcore 34 as core 12 on socket 0 00:03:34.188 EAL: Detected lcore 35 as core 13 on socket 0 00:03:34.188 EAL: Detected lcore 36 as core 0 on socket 1 00:03:34.188 EAL: Detected lcore 37 as core 1 on socket 1 00:03:34.188 EAL: Detected lcore 38 as core 2 on socket 1 00:03:34.188 EAL: Detected lcore 39 as core 3 on socket 1 00:03:34.188 EAL: Detected lcore 40 as core 4 on socket 1 00:03:34.188 EAL: Detected lcore 41 as core 5 on socket 1 00:03:34.188 EAL: Detected lcore 42 as core 8 on socket 1 00:03:34.188 EAL: Detected lcore 43 as core 9 on socket 1 00:03:34.188 EAL: Detected lcore 44 as 
core 10 on socket 1 00:03:34.188 EAL: Detected lcore 45 as core 11 on socket 1 00:03:34.188 EAL: Detected lcore 46 as core 12 on socket 1 00:03:34.188 EAL: Detected lcore 47 as core 13 on socket 1 00:03:34.188 EAL: Maximum logical cores by configuration: 128 00:03:34.188 EAL: Detected CPU lcores: 48 00:03:34.188 EAL: Detected NUMA nodes: 2 00:03:34.188 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:03:34.188 EAL: Detected shared linkage of DPDK 00:03:34.188 EAL: No shared files mode enabled, IPC will be disabled 00:03:34.188 EAL: Bus pci wants IOVA as 'DC' 00:03:34.188 EAL: Buses did not request a specific IOVA mode. 00:03:34.188 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:34.188 EAL: Selected IOVA mode 'VA' 00:03:34.188 EAL: No free 2048 kB hugepages reported on node 1 00:03:34.188 EAL: Probing VFIO support... 00:03:34.188 EAL: IOMMU type 1 (Type 1) is supported 00:03:34.188 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:34.188 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:34.188 EAL: VFIO support initialized 00:03:34.188 EAL: Ask a virtual area of 0x2e000 bytes 00:03:34.188 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:34.188 EAL: Setting up physically contiguous memory... 00:03:34.188 EAL: Setting maximum number of open files to 524288 00:03:34.188 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:34.188 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:34.188 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:34.188 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.188 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:34.188 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:34.188 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.188 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:34.188 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:34.188 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.188 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:34.188 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:34.188 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.188 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:34.188 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:34.188 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.188 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:34.188 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:34.188 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.188 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:34.188 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:34.188 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.188 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:34.188 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:34.188 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.188 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:34.188 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:34.188 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:34.188 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.188 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:34.188 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:34.188 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.188 EAL: Virtual 
area found at 0x201000a00000 (size = 0x400000000) 00:03:34.188 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:34.188 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.188 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:34.188 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:34.188 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.188 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:34.188 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:34.188 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.188 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:34.188 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:34.188 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.188 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:34.188 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:34.188 EAL: Ask a virtual area of 0x61000 bytes 00:03:34.188 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:34.188 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:34.188 EAL: Ask a virtual area of 0x400000000 bytes 00:03:34.188 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:34.188 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:34.188 EAL: Hugepages will be freed exactly as allocated. 00:03:34.188 EAL: No shared files mode enabled, IPC is disabled 00:03:34.188 EAL: No shared files mode enabled, IPC is disabled 00:03:34.188 EAL: TSC frequency is ~2700000 KHz 00:03:34.188 EAL: Main lcore 0 is ready (tid=7fb8664dba00;cpuset=[0]) 00:03:34.188 EAL: Trying to obtain current memory policy. 00:03:34.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.188 EAL: Restoring previous memory policy: 0 00:03:34.188 EAL: request: mp_malloc_sync 00:03:34.188 EAL: No shared files mode enabled, IPC is disabled 00:03:34.188 EAL: Heap on socket 0 was expanded by 2MB 00:03:34.188 EAL: No shared files mode enabled, IPC is disabled 00:03:34.188 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:34.188 EAL: Mem event callback 'spdk:(nil)' registered 00:03:34.188 00:03:34.188 00:03:34.188 CUnit - A unit testing framework for C - Version 2.1-3 00:03:34.188 http://cunit.sourceforge.net/ 00:03:34.188 00:03:34.188 00:03:34.188 Suite: components_suite 00:03:34.188 Test: vtophys_malloc_test ...passed 00:03:34.188 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:34.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.188 EAL: Restoring previous memory policy: 4 00:03:34.188 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.188 EAL: request: mp_malloc_sync 00:03:34.188 EAL: No shared files mode enabled, IPC is disabled 00:03:34.188 EAL: Heap on socket 0 was expanded by 4MB 00:03:34.188 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.188 EAL: request: mp_malloc_sync 00:03:34.188 EAL: No shared files mode enabled, IPC is disabled 00:03:34.188 EAL: Heap on socket 0 was shrunk by 4MB 00:03:34.188 EAL: Trying to obtain current memory policy. 
00:03:34.188 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.188 EAL: Restoring previous memory policy: 4 00:03:34.188 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.188 EAL: request: mp_malloc_sync 00:03:34.188 EAL: No shared files mode enabled, IPC is disabled 00:03:34.188 EAL: Heap on socket 0 was expanded by 6MB 00:03:34.188 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.188 EAL: request: mp_malloc_sync 00:03:34.188 EAL: No shared files mode enabled, IPC is disabled 00:03:34.188 EAL: Heap on socket 0 was shrunk by 6MB 00:03:34.188 EAL: Trying to obtain current memory policy. 00:03:34.189 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.189 EAL: Restoring previous memory policy: 4 00:03:34.189 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.189 EAL: request: mp_malloc_sync 00:03:34.189 EAL: No shared files mode enabled, IPC is disabled 00:03:34.189 EAL: Heap on socket 0 was expanded by 10MB 00:03:34.189 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.189 EAL: request: mp_malloc_sync 00:03:34.189 EAL: No shared files mode enabled, IPC is disabled 00:03:34.189 EAL: Heap on socket 0 was shrunk by 10MB 00:03:34.189 EAL: Trying to obtain current memory policy. 00:03:34.189 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.189 EAL: Restoring previous memory policy: 4 00:03:34.189 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.189 EAL: request: mp_malloc_sync 00:03:34.189 EAL: No shared files mode enabled, IPC is disabled 00:03:34.189 EAL: Heap on socket 0 was expanded by 18MB 00:03:34.189 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.189 EAL: request: mp_malloc_sync 00:03:34.189 EAL: No shared files mode enabled, IPC is disabled 00:03:34.189 EAL: Heap on socket 0 was shrunk by 18MB 00:03:34.189 EAL: Trying to obtain current memory policy. 00:03:34.189 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.189 EAL: Restoring previous memory policy: 4 00:03:34.189 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.189 EAL: request: mp_malloc_sync 00:03:34.189 EAL: No shared files mode enabled, IPC is disabled 00:03:34.189 EAL: Heap on socket 0 was expanded by 34MB 00:03:34.189 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.189 EAL: request: mp_malloc_sync 00:03:34.189 EAL: No shared files mode enabled, IPC is disabled 00:03:34.189 EAL: Heap on socket 0 was shrunk by 34MB 00:03:34.189 EAL: Trying to obtain current memory policy. 00:03:34.189 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.189 EAL: Restoring previous memory policy: 4 00:03:34.189 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.189 EAL: request: mp_malloc_sync 00:03:34.189 EAL: No shared files mode enabled, IPC is disabled 00:03:34.189 EAL: Heap on socket 0 was expanded by 66MB 00:03:34.189 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.189 EAL: request: mp_malloc_sync 00:03:34.189 EAL: No shared files mode enabled, IPC is disabled 00:03:34.189 EAL: Heap on socket 0 was shrunk by 66MB 00:03:34.189 EAL: Trying to obtain current memory policy. 
00:03:34.189 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.189 EAL: Restoring previous memory policy: 4 00:03:34.189 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.189 EAL: request: mp_malloc_sync 00:03:34.189 EAL: No shared files mode enabled, IPC is disabled 00:03:34.189 EAL: Heap on socket 0 was expanded by 130MB 00:03:34.189 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.189 EAL: request: mp_malloc_sync 00:03:34.189 EAL: No shared files mode enabled, IPC is disabled 00:03:34.189 EAL: Heap on socket 0 was shrunk by 130MB 00:03:34.189 EAL: Trying to obtain current memory policy. 00:03:34.189 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.447 EAL: Restoring previous memory policy: 4 00:03:34.447 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.447 EAL: request: mp_malloc_sync 00:03:34.447 EAL: No shared files mode enabled, IPC is disabled 00:03:34.447 EAL: Heap on socket 0 was expanded by 258MB 00:03:34.447 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.447 EAL: request: mp_malloc_sync 00:03:34.447 EAL: No shared files mode enabled, IPC is disabled 00:03:34.447 EAL: Heap on socket 0 was shrunk by 258MB 00:03:34.447 EAL: Trying to obtain current memory policy. 00:03:34.447 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:34.706 EAL: Restoring previous memory policy: 4 00:03:34.706 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.706 EAL: request: mp_malloc_sync 00:03:34.706 EAL: No shared files mode enabled, IPC is disabled 00:03:34.706 EAL: Heap on socket 0 was expanded by 514MB 00:03:34.706 EAL: Calling mem event callback 'spdk:(nil)' 00:03:34.706 EAL: request: mp_malloc_sync 00:03:34.706 EAL: No shared files mode enabled, IPC is disabled 00:03:34.706 EAL: Heap on socket 0 was shrunk by 514MB 00:03:34.706 EAL: Trying to obtain current memory policy. 
00:03:34.706 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:35.273 EAL: Restoring previous memory policy: 4 00:03:35.273 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.273 EAL: request: mp_malloc_sync 00:03:35.273 EAL: No shared files mode enabled, IPC is disabled 00:03:35.273 EAL: Heap on socket 0 was expanded by 1026MB 00:03:35.273 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.532 EAL: request: mp_malloc_sync 00:03:35.532 EAL: No shared files mode enabled, IPC is disabled 00:03:35.532 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:35.532 passed 00:03:35.532 00:03:35.532 Run Summary: Type Total Ran Passed Failed Inactive 00:03:35.532 suites 1 1 n/a 0 0 00:03:35.532 tests 2 2 2 0 0 00:03:35.532 asserts 497 497 497 0 n/a 00:03:35.532 00:03:35.532 Elapsed time = 1.359 seconds 00:03:35.532 EAL: Calling mem event callback 'spdk:(nil)' 00:03:35.532 EAL: request: mp_malloc_sync 00:03:35.532 EAL: No shared files mode enabled, IPC is disabled 00:03:35.532 EAL: Heap on socket 0 was shrunk by 2MB 00:03:35.532 EAL: No shared files mode enabled, IPC is disabled 00:03:35.532 EAL: No shared files mode enabled, IPC is disabled 00:03:35.532 EAL: No shared files mode enabled, IPC is disabled 00:03:35.532 00:03:35.532 real 0m1.479s 00:03:35.532 user 0m0.849s 00:03:35.532 sys 0m0.596s 00:03:35.532 05:58:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:35.532 05:58:41 -- common/autotest_common.sh@10 -- # set +x 00:03:35.532 ************************************ 00:03:35.532 END TEST env_vtophys 00:03:35.532 ************************************ 00:03:35.532 05:58:41 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:35.532 05:58:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:35.532 05:58:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:35.532 05:58:41 -- common/autotest_common.sh@10 -- # set +x 00:03:35.532 ************************************ 00:03:35.532 START TEST env_pci 00:03:35.532 ************************************ 00:03:35.532 05:58:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:35.532 00:03:35.532 00:03:35.532 CUnit - A unit testing framework for C - Version 2.1-3 00:03:35.532 http://cunit.sourceforge.net/ 00:03:35.532 00:03:35.532 00:03:35.532 Suite: pci 00:03:35.532 Test: pci_hook ...[2024-07-13 05:58:41.945194] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 991404 has claimed it 00:03:35.532 EAL: Cannot find device (10000:00:01.0) 00:03:35.532 EAL: Failed to attach device on primary process 00:03:35.532 passed 00:03:35.532 00:03:35.532 Run Summary: Type Total Ran Passed Failed Inactive 00:03:35.532 suites 1 1 n/a 0 0 00:03:35.532 tests 1 1 1 0 0 00:03:35.532 asserts 25 25 25 0 n/a 00:03:35.532 00:03:35.532 Elapsed time = 0.021 seconds 00:03:35.532 00:03:35.532 real 0m0.034s 00:03:35.532 user 0m0.011s 00:03:35.532 sys 0m0.023s 00:03:35.532 05:58:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:35.532 05:58:41 -- common/autotest_common.sh@10 -- # set +x 00:03:35.532 ************************************ 00:03:35.532 END TEST env_pci 00:03:35.532 ************************************ 00:03:35.532 05:58:41 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:35.532 05:58:41 -- env/env.sh@15 -- # uname 00:03:35.532 05:58:41 -- env/env.sh@15 -- # '[' Linux = 
Linux ']' 00:03:35.532 05:58:41 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:35.532 05:58:41 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:35.532 05:58:41 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:03:35.532 05:58:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:35.532 05:58:41 -- common/autotest_common.sh@10 -- # set +x 00:03:35.532 ************************************ 00:03:35.532 START TEST env_dpdk_post_init 00:03:35.532 ************************************ 00:03:35.532 05:58:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:35.532 EAL: Detected CPU lcores: 48 00:03:35.532 EAL: Detected NUMA nodes: 2 00:03:35.532 EAL: Detected shared linkage of DPDK 00:03:35.532 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:35.532 EAL: Selected IOVA mode 'VA' 00:03:35.532 EAL: No free 2048 kB hugepages reported on node 1 00:03:35.791 EAL: VFIO support initialized 00:03:35.791 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:35.791 EAL: Using IOMMU type 1 (Type 1) 00:03:35.791 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:35.791 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:35.791 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:35.791 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:35.791 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:35.791 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:35.791 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:35.791 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:35.791 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:35.791 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:35.791 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:35.791 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:35.791 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:35.791 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:35.791 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:36.050 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:36.616 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:03:39.930 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:03:39.930 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:03:39.930 Starting DPDK initialization... 00:03:39.930 Starting SPDK post initialization... 00:03:39.930 SPDK NVMe probe 00:03:39.930 Attaching to 0000:88:00.0 00:03:39.930 Attached to 0000:88:00.0 00:03:39.930 Cleaning up... 
00:03:39.930 00:03:39.930 real 0m4.409s 00:03:39.930 user 0m3.257s 00:03:39.930 sys 0m0.204s 00:03:39.930 05:58:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:39.930 05:58:46 -- common/autotest_common.sh@10 -- # set +x 00:03:39.930 ************************************ 00:03:39.930 END TEST env_dpdk_post_init 00:03:39.930 ************************************ 00:03:40.190 05:58:46 -- env/env.sh@26 -- # uname 00:03:40.190 05:58:46 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:40.190 05:58:46 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:40.190 05:58:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:40.190 05:58:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:40.190 05:58:46 -- common/autotest_common.sh@10 -- # set +x 00:03:40.190 ************************************ 00:03:40.190 START TEST env_mem_callbacks 00:03:40.190 ************************************ 00:03:40.190 05:58:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:40.190 EAL: Detected CPU lcores: 48 00:03:40.190 EAL: Detected NUMA nodes: 2 00:03:40.190 EAL: Detected shared linkage of DPDK 00:03:40.190 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:40.190 EAL: Selected IOVA mode 'VA' 00:03:40.190 EAL: No free 2048 kB hugepages reported on node 1 00:03:40.190 EAL: VFIO support initialized 00:03:40.190 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:40.190 00:03:40.190 00:03:40.190 CUnit - A unit testing framework for C - Version 2.1-3 00:03:40.190 http://cunit.sourceforge.net/ 00:03:40.190 00:03:40.190 00:03:40.190 Suite: memory 00:03:40.190 Test: test ... 
00:03:40.190 register 0x200000200000 2097152 00:03:40.190 malloc 3145728 00:03:40.190 register 0x200000400000 4194304 00:03:40.190 buf 0x200000500000 len 3145728 PASSED 00:03:40.190 malloc 64 00:03:40.190 buf 0x2000004fff40 len 64 PASSED 00:03:40.190 malloc 4194304 00:03:40.190 register 0x200000800000 6291456 00:03:40.190 buf 0x200000a00000 len 4194304 PASSED 00:03:40.190 free 0x200000500000 3145728 00:03:40.190 free 0x2000004fff40 64 00:03:40.190 unregister 0x200000400000 4194304 PASSED 00:03:40.190 free 0x200000a00000 4194304 00:03:40.190 unregister 0x200000800000 6291456 PASSED 00:03:40.190 malloc 8388608 00:03:40.190 register 0x200000400000 10485760 00:03:40.190 buf 0x200000600000 len 8388608 PASSED 00:03:40.190 free 0x200000600000 8388608 00:03:40.190 unregister 0x200000400000 10485760 PASSED 00:03:40.190 passed 00:03:40.190 00:03:40.190 Run Summary: Type Total Ran Passed Failed Inactive 00:03:40.190 suites 1 1 n/a 0 0 00:03:40.190 tests 1 1 1 0 0 00:03:40.190 asserts 15 15 15 0 n/a 00:03:40.190 00:03:40.190 Elapsed time = 0.005 seconds 00:03:40.190 00:03:40.190 real 0m0.051s 00:03:40.190 user 0m0.016s 00:03:40.190 sys 0m0.034s 00:03:40.190 05:58:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.190 05:58:46 -- common/autotest_common.sh@10 -- # set +x 00:03:40.190 ************************************ 00:03:40.190 END TEST env_mem_callbacks 00:03:40.190 ************************************ 00:03:40.190 00:03:40.190 real 0m6.308s 00:03:40.190 user 0m4.346s 00:03:40.190 sys 0m1.004s 00:03:40.190 05:58:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.190 05:58:46 -- common/autotest_common.sh@10 -- # set +x 00:03:40.190 ************************************ 00:03:40.190 END TEST env 00:03:40.190 ************************************ 00:03:40.190 05:58:46 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:40.190 05:58:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:40.190 05:58:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:40.190 05:58:46 -- common/autotest_common.sh@10 -- # set +x 00:03:40.190 ************************************ 00:03:40.190 START TEST rpc 00:03:40.190 ************************************ 00:03:40.190 05:58:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:40.190 * Looking for test storage... 00:03:40.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:40.190 05:58:46 -- rpc/rpc.sh@65 -- # spdk_pid=992069 00:03:40.190 05:58:46 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:40.190 05:58:46 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:40.190 05:58:46 -- rpc/rpc.sh@67 -- # waitforlisten 992069 00:03:40.190 05:58:46 -- common/autotest_common.sh@819 -- # '[' -z 992069 ']' 00:03:40.190 05:58:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:40.190 05:58:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:03:40.190 05:58:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:40.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:03:40.190 05:58:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:03:40.190 05:58:46 -- common/autotest_common.sh@10 -- # set +x 00:03:40.190 [2024-07-13 05:58:46.622210] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:03:40.190 [2024-07-13 05:58:46.622306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid992069 ] 00:03:40.190 EAL: No free 2048 kB hugepages reported on node 1 00:03:40.190 [2024-07-13 05:58:46.679211] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:40.448 [2024-07-13 05:58:46.784717] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:40.448 [2024-07-13 05:58:46.784898] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:40.448 [2024-07-13 05:58:46.784921] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 992069' to capture a snapshot of events at runtime. 00:03:40.448 [2024-07-13 05:58:46.784941] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid992069 for offline analysis/debug. 00:03:40.448 [2024-07-13 05:58:46.784980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:41.382 05:58:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:03:41.382 05:58:47 -- common/autotest_common.sh@852 -- # return 0 00:03:41.382 05:58:47 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:41.382 05:58:47 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:41.382 05:58:47 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:41.382 05:58:47 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:41.382 05:58:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:41.382 05:58:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:41.382 05:58:47 -- common/autotest_common.sh@10 -- # set +x 00:03:41.382 ************************************ 00:03:41.382 START TEST rpc_integrity 00:03:41.382 ************************************ 00:03:41.382 05:58:47 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:03:41.382 05:58:47 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:41.382 05:58:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:41.382 05:58:47 -- common/autotest_common.sh@10 -- # set +x 00:03:41.382 05:58:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:41.382 05:58:47 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:41.382 05:58:47 -- rpc/rpc.sh@13 -- # jq length 00:03:41.382 05:58:47 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:41.382 05:58:47 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:41.382 05:58:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:41.382 05:58:47 -- common/autotest_common.sh@10 -- # set +x 00:03:41.382 05:58:47 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:03:41.382 05:58:47 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:41.382 05:58:47 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:41.382 05:58:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:41.382 05:58:47 -- common/autotest_common.sh@10 -- # set +x 00:03:41.382 05:58:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:41.382 05:58:47 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:41.382 { 00:03:41.382 "name": "Malloc0", 00:03:41.382 "aliases": [ 00:03:41.382 "83740ac3-15ac-4e74-9883-641167563f9f" 00:03:41.382 ], 00:03:41.382 "product_name": "Malloc disk", 00:03:41.382 "block_size": 512, 00:03:41.382 "num_blocks": 16384, 00:03:41.382 "uuid": "83740ac3-15ac-4e74-9883-641167563f9f", 00:03:41.382 "assigned_rate_limits": { 00:03:41.382 "rw_ios_per_sec": 0, 00:03:41.382 "rw_mbytes_per_sec": 0, 00:03:41.382 "r_mbytes_per_sec": 0, 00:03:41.382 "w_mbytes_per_sec": 0 00:03:41.382 }, 00:03:41.382 "claimed": false, 00:03:41.382 "zoned": false, 00:03:41.382 "supported_io_types": { 00:03:41.382 "read": true, 00:03:41.382 "write": true, 00:03:41.382 "unmap": true, 00:03:41.382 "write_zeroes": true, 00:03:41.382 "flush": true, 00:03:41.382 "reset": true, 00:03:41.382 "compare": false, 00:03:41.382 "compare_and_write": false, 00:03:41.382 "abort": true, 00:03:41.382 "nvme_admin": false, 00:03:41.382 "nvme_io": false 00:03:41.382 }, 00:03:41.382 "memory_domains": [ 00:03:41.382 { 00:03:41.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.382 "dma_device_type": 2 00:03:41.382 } 00:03:41.382 ], 00:03:41.382 "driver_specific": {} 00:03:41.382 } 00:03:41.382 ]' 00:03:41.382 05:58:47 -- rpc/rpc.sh@17 -- # jq length 00:03:41.382 05:58:47 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:41.382 05:58:47 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:41.382 05:58:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:41.382 05:58:47 -- common/autotest_common.sh@10 -- # set +x 00:03:41.382 [2024-07-13 05:58:47.688672] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:41.382 [2024-07-13 05:58:47.688724] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:41.382 [2024-07-13 05:58:47.688748] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9cef70 00:03:41.382 [2024-07-13 05:58:47.688764] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:41.383 [2024-07-13 05:58:47.690318] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:41.383 [2024-07-13 05:58:47.690348] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:41.383 Passthru0 00:03:41.383 05:58:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:41.383 05:58:47 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:41.383 05:58:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:41.383 05:58:47 -- common/autotest_common.sh@10 -- # set +x 00:03:41.383 05:58:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:41.383 05:58:47 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:41.383 { 00:03:41.383 "name": "Malloc0", 00:03:41.383 "aliases": [ 00:03:41.383 "83740ac3-15ac-4e74-9883-641167563f9f" 00:03:41.383 ], 00:03:41.383 "product_name": "Malloc disk", 00:03:41.383 "block_size": 512, 00:03:41.383 "num_blocks": 16384, 00:03:41.383 "uuid": "83740ac3-15ac-4e74-9883-641167563f9f", 00:03:41.383 "assigned_rate_limits": { 00:03:41.383 "rw_ios_per_sec": 0, 00:03:41.383 "rw_mbytes_per_sec": 0, 00:03:41.383 
"r_mbytes_per_sec": 0, 00:03:41.383 "w_mbytes_per_sec": 0 00:03:41.383 }, 00:03:41.383 "claimed": true, 00:03:41.383 "claim_type": "exclusive_write", 00:03:41.383 "zoned": false, 00:03:41.383 "supported_io_types": { 00:03:41.383 "read": true, 00:03:41.383 "write": true, 00:03:41.383 "unmap": true, 00:03:41.383 "write_zeroes": true, 00:03:41.383 "flush": true, 00:03:41.383 "reset": true, 00:03:41.383 "compare": false, 00:03:41.383 "compare_and_write": false, 00:03:41.383 "abort": true, 00:03:41.383 "nvme_admin": false, 00:03:41.383 "nvme_io": false 00:03:41.383 }, 00:03:41.383 "memory_domains": [ 00:03:41.383 { 00:03:41.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.383 "dma_device_type": 2 00:03:41.383 } 00:03:41.383 ], 00:03:41.383 "driver_specific": {} 00:03:41.383 }, 00:03:41.383 { 00:03:41.383 "name": "Passthru0", 00:03:41.383 "aliases": [ 00:03:41.383 "0a958e26-aa90-547b-acdc-1b76d77da1d8" 00:03:41.383 ], 00:03:41.383 "product_name": "passthru", 00:03:41.383 "block_size": 512, 00:03:41.383 "num_blocks": 16384, 00:03:41.383 "uuid": "0a958e26-aa90-547b-acdc-1b76d77da1d8", 00:03:41.383 "assigned_rate_limits": { 00:03:41.383 "rw_ios_per_sec": 0, 00:03:41.383 "rw_mbytes_per_sec": 0, 00:03:41.383 "r_mbytes_per_sec": 0, 00:03:41.383 "w_mbytes_per_sec": 0 00:03:41.383 }, 00:03:41.383 "claimed": false, 00:03:41.383 "zoned": false, 00:03:41.383 "supported_io_types": { 00:03:41.383 "read": true, 00:03:41.383 "write": true, 00:03:41.383 "unmap": true, 00:03:41.383 "write_zeroes": true, 00:03:41.383 "flush": true, 00:03:41.383 "reset": true, 00:03:41.383 "compare": false, 00:03:41.383 "compare_and_write": false, 00:03:41.383 "abort": true, 00:03:41.383 "nvme_admin": false, 00:03:41.383 "nvme_io": false 00:03:41.383 }, 00:03:41.383 "memory_domains": [ 00:03:41.383 { 00:03:41.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.383 "dma_device_type": 2 00:03:41.383 } 00:03:41.383 ], 00:03:41.383 "driver_specific": { 00:03:41.383 "passthru": { 00:03:41.383 "name": "Passthru0", 00:03:41.383 "base_bdev_name": "Malloc0" 00:03:41.383 } 00:03:41.383 } 00:03:41.383 } 00:03:41.383 ]' 00:03:41.383 05:58:47 -- rpc/rpc.sh@21 -- # jq length 00:03:41.383 05:58:47 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:41.383 05:58:47 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:41.383 05:58:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:41.383 05:58:47 -- common/autotest_common.sh@10 -- # set +x 00:03:41.383 05:58:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:41.383 05:58:47 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:41.383 05:58:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:41.383 05:58:47 -- common/autotest_common.sh@10 -- # set +x 00:03:41.383 05:58:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:41.383 05:58:47 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:41.383 05:58:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:41.383 05:58:47 -- common/autotest_common.sh@10 -- # set +x 00:03:41.383 05:58:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:41.383 05:58:47 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:41.383 05:58:47 -- rpc/rpc.sh@26 -- # jq length 00:03:41.383 05:58:47 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:41.383 00:03:41.383 real 0m0.235s 00:03:41.383 user 0m0.155s 00:03:41.383 sys 0m0.024s 00:03:41.383 05:58:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.383 05:58:47 -- common/autotest_common.sh@10 -- # set +x 00:03:41.383 ************************************ 
00:03:41.383 END TEST rpc_integrity 00:03:41.383 ************************************ 00:03:41.383 05:58:47 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:41.383 05:58:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:41.383 05:58:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:41.383 05:58:47 -- common/autotest_common.sh@10 -- # set +x 00:03:41.383 ************************************ 00:03:41.383 START TEST rpc_plugins 00:03:41.383 ************************************ 00:03:41.383 05:58:47 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:03:41.383 05:58:47 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:41.383 05:58:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:41.383 05:58:47 -- common/autotest_common.sh@10 -- # set +x 00:03:41.383 05:58:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:41.383 05:58:47 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:41.383 05:58:47 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:41.383 05:58:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:41.383 05:58:47 -- common/autotest_common.sh@10 -- # set +x 00:03:41.383 05:58:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:41.383 05:58:47 -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:41.383 { 00:03:41.383 "name": "Malloc1", 00:03:41.383 "aliases": [ 00:03:41.383 "11bce120-e01b-41c2-91c8-ff2e4be6ea40" 00:03:41.383 ], 00:03:41.383 "product_name": "Malloc disk", 00:03:41.383 "block_size": 4096, 00:03:41.383 "num_blocks": 256, 00:03:41.383 "uuid": "11bce120-e01b-41c2-91c8-ff2e4be6ea40", 00:03:41.383 "assigned_rate_limits": { 00:03:41.383 "rw_ios_per_sec": 0, 00:03:41.383 "rw_mbytes_per_sec": 0, 00:03:41.383 "r_mbytes_per_sec": 0, 00:03:41.383 "w_mbytes_per_sec": 0 00:03:41.383 }, 00:03:41.383 "claimed": false, 00:03:41.383 "zoned": false, 00:03:41.383 "supported_io_types": { 00:03:41.383 "read": true, 00:03:41.383 "write": true, 00:03:41.383 "unmap": true, 00:03:41.383 "write_zeroes": true, 00:03:41.383 "flush": true, 00:03:41.383 "reset": true, 00:03:41.383 "compare": false, 00:03:41.383 "compare_and_write": false, 00:03:41.383 "abort": true, 00:03:41.383 "nvme_admin": false, 00:03:41.383 "nvme_io": false 00:03:41.383 }, 00:03:41.383 "memory_domains": [ 00:03:41.383 { 00:03:41.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.383 "dma_device_type": 2 00:03:41.383 } 00:03:41.383 ], 00:03:41.383 "driver_specific": {} 00:03:41.383 } 00:03:41.383 ]' 00:03:41.383 05:58:47 -- rpc/rpc.sh@32 -- # jq length 00:03:41.641 05:58:47 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:41.641 05:58:47 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:41.641 05:58:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:41.641 05:58:47 -- common/autotest_common.sh@10 -- # set +x 00:03:41.641 05:58:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:41.641 05:58:47 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:41.641 05:58:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:41.641 05:58:47 -- common/autotest_common.sh@10 -- # set +x 00:03:41.641 05:58:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:41.641 05:58:47 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:41.641 05:58:47 -- rpc/rpc.sh@36 -- # jq length 00:03:41.641 05:58:47 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:41.641 00:03:41.641 real 0m0.113s 00:03:41.641 user 0m0.071s 00:03:41.641 sys 0m0.011s 00:03:41.641 05:58:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.641 05:58:47 -- 
common/autotest_common.sh@10 -- # set +x 00:03:41.641 ************************************ 00:03:41.641 END TEST rpc_plugins 00:03:41.641 ************************************ 00:03:41.641 05:58:47 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:41.641 05:58:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:41.641 05:58:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:41.641 05:58:47 -- common/autotest_common.sh@10 -- # set +x 00:03:41.641 ************************************ 00:03:41.641 START TEST rpc_trace_cmd_test 00:03:41.641 ************************************ 00:03:41.641 05:58:47 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:03:41.641 05:58:47 -- rpc/rpc.sh@40 -- # local info 00:03:41.641 05:58:47 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:41.641 05:58:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:41.641 05:58:47 -- common/autotest_common.sh@10 -- # set +x 00:03:41.641 05:58:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:41.641 05:58:47 -- rpc/rpc.sh@42 -- # info='{ 00:03:41.641 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid992069", 00:03:41.641 "tpoint_group_mask": "0x8", 00:03:41.641 "iscsi_conn": { 00:03:41.641 "mask": "0x2", 00:03:41.641 "tpoint_mask": "0x0" 00:03:41.641 }, 00:03:41.641 "scsi": { 00:03:41.641 "mask": "0x4", 00:03:41.641 "tpoint_mask": "0x0" 00:03:41.641 }, 00:03:41.641 "bdev": { 00:03:41.641 "mask": "0x8", 00:03:41.641 "tpoint_mask": "0xffffffffffffffff" 00:03:41.641 }, 00:03:41.641 "nvmf_rdma": { 00:03:41.641 "mask": "0x10", 00:03:41.641 "tpoint_mask": "0x0" 00:03:41.641 }, 00:03:41.641 "nvmf_tcp": { 00:03:41.641 "mask": "0x20", 00:03:41.641 "tpoint_mask": "0x0" 00:03:41.641 }, 00:03:41.641 "ftl": { 00:03:41.641 "mask": "0x40", 00:03:41.641 "tpoint_mask": "0x0" 00:03:41.641 }, 00:03:41.641 "blobfs": { 00:03:41.641 "mask": "0x80", 00:03:41.641 "tpoint_mask": "0x0" 00:03:41.641 }, 00:03:41.641 "dsa": { 00:03:41.641 "mask": "0x200", 00:03:41.641 "tpoint_mask": "0x0" 00:03:41.641 }, 00:03:41.642 "thread": { 00:03:41.642 "mask": "0x400", 00:03:41.642 "tpoint_mask": "0x0" 00:03:41.642 }, 00:03:41.642 "nvme_pcie": { 00:03:41.642 "mask": "0x800", 00:03:41.642 "tpoint_mask": "0x0" 00:03:41.642 }, 00:03:41.642 "iaa": { 00:03:41.642 "mask": "0x1000", 00:03:41.642 "tpoint_mask": "0x0" 00:03:41.642 }, 00:03:41.642 "nvme_tcp": { 00:03:41.642 "mask": "0x2000", 00:03:41.642 "tpoint_mask": "0x0" 00:03:41.642 }, 00:03:41.642 "bdev_nvme": { 00:03:41.642 "mask": "0x4000", 00:03:41.642 "tpoint_mask": "0x0" 00:03:41.642 } 00:03:41.642 }' 00:03:41.642 05:58:47 -- rpc/rpc.sh@43 -- # jq length 00:03:41.642 05:58:48 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:03:41.642 05:58:48 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:41.642 05:58:48 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:41.642 05:58:48 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:41.642 05:58:48 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:41.642 05:58:48 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:41.642 05:58:48 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:41.642 05:58:48 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:41.900 05:58:48 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:41.900 00:03:41.900 real 0m0.194s 00:03:41.900 user 0m0.174s 00:03:41.900 sys 0m0.012s 00:03:41.900 05:58:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:41.900 05:58:48 -- common/autotest_common.sh@10 -- # set +x 00:03:41.900 ************************************ 
00:03:41.900 END TEST rpc_trace_cmd_test 00:03:41.900 ************************************ 00:03:41.900 05:58:48 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:41.900 05:58:48 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:41.900 05:58:48 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:41.900 05:58:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:41.900 05:58:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:41.900 05:58:48 -- common/autotest_common.sh@10 -- # set +x 00:03:41.900 ************************************ 00:03:41.900 START TEST rpc_daemon_integrity 00:03:41.900 ************************************ 00:03:41.900 05:58:48 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:03:41.900 05:58:48 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:41.900 05:58:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:41.900 05:58:48 -- common/autotest_common.sh@10 -- # set +x 00:03:41.900 05:58:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:41.900 05:58:48 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:41.900 05:58:48 -- rpc/rpc.sh@13 -- # jq length 00:03:41.900 05:58:48 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:41.900 05:58:48 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:41.900 05:58:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:41.900 05:58:48 -- common/autotest_common.sh@10 -- # set +x 00:03:41.900 05:58:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:41.900 05:58:48 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:41.900 05:58:48 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:41.900 05:58:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:41.900 05:58:48 -- common/autotest_common.sh@10 -- # set +x 00:03:41.900 05:58:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:41.900 05:58:48 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:41.900 { 00:03:41.900 "name": "Malloc2", 00:03:41.900 "aliases": [ 00:03:41.900 "687ddac6-2664-4aca-a51e-50a87f80ee0c" 00:03:41.900 ], 00:03:41.900 "product_name": "Malloc disk", 00:03:41.900 "block_size": 512, 00:03:41.900 "num_blocks": 16384, 00:03:41.900 "uuid": "687ddac6-2664-4aca-a51e-50a87f80ee0c", 00:03:41.900 "assigned_rate_limits": { 00:03:41.900 "rw_ios_per_sec": 0, 00:03:41.900 "rw_mbytes_per_sec": 0, 00:03:41.900 "r_mbytes_per_sec": 0, 00:03:41.900 "w_mbytes_per_sec": 0 00:03:41.900 }, 00:03:41.900 "claimed": false, 00:03:41.900 "zoned": false, 00:03:41.900 "supported_io_types": { 00:03:41.900 "read": true, 00:03:41.900 "write": true, 00:03:41.900 "unmap": true, 00:03:41.900 "write_zeroes": true, 00:03:41.900 "flush": true, 00:03:41.900 "reset": true, 00:03:41.900 "compare": false, 00:03:41.900 "compare_and_write": false, 00:03:41.900 "abort": true, 00:03:41.900 "nvme_admin": false, 00:03:41.900 "nvme_io": false 00:03:41.900 }, 00:03:41.900 "memory_domains": [ 00:03:41.900 { 00:03:41.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.900 "dma_device_type": 2 00:03:41.900 } 00:03:41.900 ], 00:03:41.900 "driver_specific": {} 00:03:41.900 } 00:03:41.900 ]' 00:03:41.900 05:58:48 -- rpc/rpc.sh@17 -- # jq length 00:03:41.900 05:58:48 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:41.900 05:58:48 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:41.900 05:58:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:41.900 05:58:48 -- common/autotest_common.sh@10 -- # set +x 00:03:41.900 [2024-07-13 05:58:48.306406] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:41.900 [2024-07-13 
05:58:48.306454] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:41.900 [2024-07-13 05:58:48.306478] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb6e970 00:03:41.900 [2024-07-13 05:58:48.306494] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:41.900 [2024-07-13 05:58:48.307856] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:41.900 [2024-07-13 05:58:48.307895] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:41.900 Passthru0 00:03:41.900 05:58:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:41.900 05:58:48 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:41.900 05:58:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:41.900 05:58:48 -- common/autotest_common.sh@10 -- # set +x 00:03:41.900 05:58:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:41.900 05:58:48 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:41.900 { 00:03:41.900 "name": "Malloc2", 00:03:41.900 "aliases": [ 00:03:41.900 "687ddac6-2664-4aca-a51e-50a87f80ee0c" 00:03:41.900 ], 00:03:41.900 "product_name": "Malloc disk", 00:03:41.900 "block_size": 512, 00:03:41.900 "num_blocks": 16384, 00:03:41.900 "uuid": "687ddac6-2664-4aca-a51e-50a87f80ee0c", 00:03:41.900 "assigned_rate_limits": { 00:03:41.900 "rw_ios_per_sec": 0, 00:03:41.900 "rw_mbytes_per_sec": 0, 00:03:41.900 "r_mbytes_per_sec": 0, 00:03:41.900 "w_mbytes_per_sec": 0 00:03:41.900 }, 00:03:41.900 "claimed": true, 00:03:41.900 "claim_type": "exclusive_write", 00:03:41.900 "zoned": false, 00:03:41.900 "supported_io_types": { 00:03:41.900 "read": true, 00:03:41.900 "write": true, 00:03:41.900 "unmap": true, 00:03:41.900 "write_zeroes": true, 00:03:41.900 "flush": true, 00:03:41.900 "reset": true, 00:03:41.900 "compare": false, 00:03:41.900 "compare_and_write": false, 00:03:41.900 "abort": true, 00:03:41.900 "nvme_admin": false, 00:03:41.900 "nvme_io": false 00:03:41.900 }, 00:03:41.900 "memory_domains": [ 00:03:41.900 { 00:03:41.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.900 "dma_device_type": 2 00:03:41.900 } 00:03:41.900 ], 00:03:41.900 "driver_specific": {} 00:03:41.900 }, 00:03:41.900 { 00:03:41.900 "name": "Passthru0", 00:03:41.900 "aliases": [ 00:03:41.900 "b63e7157-f44e-577a-beb1-b12dade503e4" 00:03:41.900 ], 00:03:41.900 "product_name": "passthru", 00:03:41.900 "block_size": 512, 00:03:41.900 "num_blocks": 16384, 00:03:41.900 "uuid": "b63e7157-f44e-577a-beb1-b12dade503e4", 00:03:41.900 "assigned_rate_limits": { 00:03:41.900 "rw_ios_per_sec": 0, 00:03:41.900 "rw_mbytes_per_sec": 0, 00:03:41.900 "r_mbytes_per_sec": 0, 00:03:41.900 "w_mbytes_per_sec": 0 00:03:41.900 }, 00:03:41.901 "claimed": false, 00:03:41.901 "zoned": false, 00:03:41.901 "supported_io_types": { 00:03:41.901 "read": true, 00:03:41.901 "write": true, 00:03:41.901 "unmap": true, 00:03:41.901 "write_zeroes": true, 00:03:41.901 "flush": true, 00:03:41.901 "reset": true, 00:03:41.901 "compare": false, 00:03:41.901 "compare_and_write": false, 00:03:41.901 "abort": true, 00:03:41.901 "nvme_admin": false, 00:03:41.901 "nvme_io": false 00:03:41.901 }, 00:03:41.901 "memory_domains": [ 00:03:41.901 { 00:03:41.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:41.901 "dma_device_type": 2 00:03:41.901 } 00:03:41.901 ], 00:03:41.901 "driver_specific": { 00:03:41.901 "passthru": { 00:03:41.901 "name": "Passthru0", 00:03:41.901 "base_bdev_name": "Malloc2" 00:03:41.901 } 00:03:41.901 } 00:03:41.901 } 
00:03:41.901 ]' 00:03:41.901 05:58:48 -- rpc/rpc.sh@21 -- # jq length 00:03:41.901 05:58:48 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:41.901 05:58:48 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:41.901 05:58:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:41.901 05:58:48 -- common/autotest_common.sh@10 -- # set +x 00:03:41.901 05:58:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:41.901 05:58:48 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:41.901 05:58:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:41.901 05:58:48 -- common/autotest_common.sh@10 -- # set +x 00:03:41.901 05:58:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:41.901 05:58:48 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:41.901 05:58:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:41.901 05:58:48 -- common/autotest_common.sh@10 -- # set +x 00:03:41.901 05:58:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:41.901 05:58:48 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:41.901 05:58:48 -- rpc/rpc.sh@26 -- # jq length 00:03:42.159 05:58:48 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:42.159 00:03:42.159 real 0m0.225s 00:03:42.159 user 0m0.152s 00:03:42.159 sys 0m0.015s 00:03:42.159 05:58:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.159 05:58:48 -- common/autotest_common.sh@10 -- # set +x 00:03:42.159 ************************************ 00:03:42.159 END TEST rpc_daemon_integrity 00:03:42.159 ************************************ 00:03:42.159 05:58:48 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:42.159 05:58:48 -- rpc/rpc.sh@84 -- # killprocess 992069 00:03:42.159 05:58:48 -- common/autotest_common.sh@926 -- # '[' -z 992069 ']' 00:03:42.159 05:58:48 -- common/autotest_common.sh@930 -- # kill -0 992069 00:03:42.159 05:58:48 -- common/autotest_common.sh@931 -- # uname 00:03:42.159 05:58:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:03:42.159 05:58:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 992069 00:03:42.159 05:58:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:03:42.159 05:58:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:03:42.159 05:58:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 992069' 00:03:42.159 killing process with pid 992069 00:03:42.159 05:58:48 -- common/autotest_common.sh@945 -- # kill 992069 00:03:42.159 05:58:48 -- common/autotest_common.sh@950 -- # wait 992069 00:03:42.417 00:03:42.417 real 0m2.396s 00:03:42.417 user 0m3.068s 00:03:42.417 sys 0m0.560s 00:03:42.417 05:58:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.417 05:58:48 -- common/autotest_common.sh@10 -- # set +x 00:03:42.417 ************************************ 00:03:42.417 END TEST rpc 00:03:42.417 ************************************ 00:03:42.675 05:58:48 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:42.675 05:58:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:42.675 05:58:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:42.675 05:58:48 -- common/autotest_common.sh@10 -- # set +x 00:03:42.675 ************************************ 00:03:42.675 START TEST rpc_client 00:03:42.675 ************************************ 00:03:42.675 05:58:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:42.675 * 
Looking for test storage... 00:03:42.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:42.675 05:58:49 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:42.675 OK 00:03:42.675 05:58:49 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:42.675 00:03:42.675 real 0m0.066s 00:03:42.675 user 0m0.027s 00:03:42.675 sys 0m0.044s 00:03:42.675 05:58:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.675 05:58:49 -- common/autotest_common.sh@10 -- # set +x 00:03:42.675 ************************************ 00:03:42.675 END TEST rpc_client 00:03:42.675 ************************************ 00:03:42.675 05:58:49 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:42.675 05:58:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:42.675 05:58:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:42.675 05:58:49 -- common/autotest_common.sh@10 -- # set +x 00:03:42.676 ************************************ 00:03:42.676 START TEST json_config 00:03:42.676 ************************************ 00:03:42.676 05:58:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:42.676 05:58:49 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:42.676 05:58:49 -- nvmf/common.sh@7 -- # uname -s 00:03:42.676 05:58:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:42.676 05:58:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:42.676 05:58:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:42.676 05:58:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:42.676 05:58:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:42.676 05:58:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:42.676 05:58:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:42.676 05:58:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:42.676 05:58:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:42.676 05:58:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:42.676 05:58:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:42.676 05:58:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:42.676 05:58:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:42.676 05:58:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:42.676 05:58:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:42.676 05:58:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:42.676 05:58:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:42.676 05:58:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:42.676 05:58:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:42.676 05:58:49 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.676 05:58:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.676 05:58:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.676 05:58:49 -- paths/export.sh@5 -- # export PATH 00:03:42.676 05:58:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:42.676 05:58:49 -- nvmf/common.sh@46 -- # : 0 00:03:42.676 05:58:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:42.676 05:58:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:42.676 05:58:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:42.676 05:58:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:42.676 05:58:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:42.676 05:58:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:42.676 05:58:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:42.676 05:58:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:42.676 05:58:49 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:03:42.676 05:58:49 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:03:42.676 05:58:49 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:03:42.676 05:58:49 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:42.676 05:58:49 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:03:42.676 05:58:49 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:03:42.676 05:58:49 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:42.676 05:58:49 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:03:42.676 05:58:49 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:42.676 05:58:49 -- json_config/json_config.sh@32 -- # declare -A app_params 00:03:42.676 05:58:49 -- json_config/json_config.sh@33 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:42.676 05:58:49 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:03:42.676 05:58:49 -- json_config/json_config.sh@43 -- # last_event_id=0 00:03:42.676 05:58:49 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:42.676 05:58:49 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:03:42.676 INFO: JSON configuration test init 00:03:42.676 05:58:49 -- json_config/json_config.sh@420 -- # json_config_test_init 00:03:42.676 05:58:49 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:03:42.676 05:58:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:42.676 05:58:49 -- common/autotest_common.sh@10 -- # set +x 00:03:42.676 05:58:49 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:03:42.676 05:58:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:42.676 05:58:49 -- common/autotest_common.sh@10 -- # set +x 00:03:42.676 05:58:49 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:03:42.676 05:58:49 -- json_config/json_config.sh@98 -- # local app=target 00:03:42.676 05:58:49 -- json_config/json_config.sh@99 -- # shift 00:03:42.676 05:58:49 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:03:42.676 05:58:49 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:03:42.676 05:58:49 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:03:42.676 05:58:49 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:03:42.676 05:58:49 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:03:42.676 05:58:49 -- json_config/json_config.sh@111 -- # app_pid[$app]=992549 00:03:42.676 05:58:49 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:42.676 05:58:49 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:03:42.676 Waiting for target to run... 00:03:42.676 05:58:49 -- json_config/json_config.sh@114 -- # waitforlisten 992549 /var/tmp/spdk_tgt.sock 00:03:42.676 05:58:49 -- common/autotest_common.sh@819 -- # '[' -z 992549 ']' 00:03:42.676 05:58:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:42.676 05:58:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:03:42.676 05:58:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:42.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:42.676 05:58:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:03:42.676 05:58:49 -- common/autotest_common.sh@10 -- # set +x 00:03:42.676 [2024-07-13 05:58:49.148525] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:03:42.676 [2024-07-13 05:58:49.148629] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid992549 ] 00:03:42.676 EAL: No free 2048 kB hugepages reported on node 1 00:03:43.257 [2024-07-13 05:58:49.653761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:43.257 [2024-07-13 05:58:49.760758] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:43.257 [2024-07-13 05:58:49.760961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:43.825 05:58:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:03:43.825 05:58:50 -- common/autotest_common.sh@852 -- # return 0 00:03:43.825 05:58:50 -- json_config/json_config.sh@115 -- # echo '' 00:03:43.825 00:03:43.826 05:58:50 -- json_config/json_config.sh@322 -- # create_accel_config 00:03:43.826 05:58:50 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:03:43.826 05:58:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:43.826 05:58:50 -- common/autotest_common.sh@10 -- # set +x 00:03:43.826 05:58:50 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:03:43.826 05:58:50 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:03:43.826 05:58:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:43.826 05:58:50 -- common/autotest_common.sh@10 -- # set +x 00:03:43.826 05:58:50 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:43.826 05:58:50 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:03:43.826 05:58:50 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:47.106 05:58:53 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:03:47.106 05:58:53 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:03:47.106 05:58:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:47.106 05:58:53 -- common/autotest_common.sh@10 -- # set +x 00:03:47.106 05:58:53 -- json_config/json_config.sh@48 -- # local ret=0 00:03:47.106 05:58:53 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:47.106 05:58:53 -- json_config/json_config.sh@49 -- # local enabled_types 00:03:47.106 05:58:53 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:47.106 05:58:53 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:47.106 05:58:53 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:47.106 05:58:53 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:03:47.106 05:58:53 -- json_config/json_config.sh@51 -- # local get_types 00:03:47.106 05:58:53 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:03:47.106 05:58:53 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:03:47.106 05:58:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:47.106 05:58:53 -- common/autotest_common.sh@10 -- # set +x 00:03:47.106 05:58:53 -- json_config/json_config.sh@58 -- # return 0 00:03:47.106 05:58:53 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:03:47.106 05:58:53 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:03:47.106 05:58:53 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:03:47.106 05:58:53 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:03:47.106 05:58:53 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:03:47.106 05:58:53 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:03:47.106 05:58:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:47.106 05:58:53 -- common/autotest_common.sh@10 -- # set +x 00:03:47.106 05:58:53 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:47.106 05:58:53 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:03:47.106 05:58:53 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:03:47.106 05:58:53 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:47.106 05:58:53 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:47.363 MallocForNvmf0 00:03:47.363 05:58:53 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:47.363 05:58:53 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:47.619 MallocForNvmf1 00:03:47.619 05:58:53 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:47.619 05:58:54 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:47.876 [2024-07-13 05:58:54.225443] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:47.876 05:58:54 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:47.876 05:58:54 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:48.134 05:58:54 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:48.134 05:58:54 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:48.390 05:58:54 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:48.390 05:58:54 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:48.647 05:58:54 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:48.647 05:58:54 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:48.905 [2024-07-13 05:58:55.168517] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 
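The tgt_rpc calls traced above assemble the NVMe-oF/TCP target entirely over JSON-RPC: two malloc bdevs, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with both bdevs as namespaces, and a listener on 127.0.0.1:4420. A minimal sketch of the same sequence run by hand against the spdk_tgt socket used in this job (the $RPC shorthand is illustrative, not part of the test script):

RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MB bdev, 512 B blocks
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MB bdev, 1024 B blocks
$RPC nvmf_create_transport -t tcp -u 8192 -c 0          # TCP transport, as in the trace
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

Here -a allows any host to connect and -s sets the serial number, matching the nvmf_create_subsystem call in the log above.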
00:03:48.905 05:58:55 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:03:48.905 05:58:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:48.905 05:58:55 -- common/autotest_common.sh@10 -- # set +x 00:03:48.905 05:58:55 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:03:48.905 05:58:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:48.905 05:58:55 -- common/autotest_common.sh@10 -- # set +x 00:03:48.905 05:58:55 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:03:48.905 05:58:55 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:48.905 05:58:55 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:49.162 MallocBdevForConfigChangeCheck 00:03:49.162 05:58:55 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:03:49.162 05:58:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:49.162 05:58:55 -- common/autotest_common.sh@10 -- # set +x 00:03:49.162 05:58:55 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:03:49.162 05:58:55 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:49.419 05:58:55 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:03:49.419 INFO: shutting down applications... 00:03:49.419 05:58:55 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:03:49.419 05:58:55 -- json_config/json_config.sh@431 -- # json_config_clear target 00:03:49.419 05:58:55 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:03:49.419 05:58:55 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:51.318 Calling clear_iscsi_subsystem 00:03:51.318 Calling clear_nvmf_subsystem 00:03:51.318 Calling clear_nbd_subsystem 00:03:51.318 Calling clear_ublk_subsystem 00:03:51.318 Calling clear_vhost_blk_subsystem 00:03:51.318 Calling clear_vhost_scsi_subsystem 00:03:51.318 Calling clear_scheduler_subsystem 00:03:51.318 Calling clear_bdev_subsystem 00:03:51.318 Calling clear_accel_subsystem 00:03:51.318 Calling clear_vmd_subsystem 00:03:51.318 Calling clear_sock_subsystem 00:03:51.318 Calling clear_iobuf_subsystem 00:03:51.318 05:58:57 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:51.318 05:58:57 -- json_config/json_config.sh@396 -- # count=100 00:03:51.318 05:58:57 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:03:51.318 05:58:57 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:51.318 05:58:57 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:51.318 05:58:57 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:51.318 05:58:57 -- json_config/json_config.sh@398 -- # break 00:03:51.318 05:58:57 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:03:51.318 05:58:57 -- json_config/json_config.sh@432 -- # 
json_config_test_shutdown_app target 00:03:51.318 05:58:57 -- json_config/json_config.sh@120 -- # local app=target 00:03:51.318 05:58:57 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:03:51.318 05:58:57 -- json_config/json_config.sh@124 -- # [[ -n 992549 ]] 00:03:51.318 05:58:57 -- json_config/json_config.sh@127 -- # kill -SIGINT 992549 00:03:51.318 05:58:57 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:03:51.318 05:58:57 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:03:51.318 05:58:57 -- json_config/json_config.sh@130 -- # kill -0 992549 00:03:51.318 05:58:57 -- json_config/json_config.sh@134 -- # sleep 0.5 00:03:51.883 05:58:58 -- json_config/json_config.sh@129 -- # (( i++ )) 00:03:51.883 05:58:58 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:03:51.883 05:58:58 -- json_config/json_config.sh@130 -- # kill -0 992549 00:03:51.883 05:58:58 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:03:51.883 05:58:58 -- json_config/json_config.sh@132 -- # break 00:03:51.883 05:58:58 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:03:51.883 05:58:58 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:03:51.883 SPDK target shutdown done 00:03:51.883 05:58:58 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:03:51.883 INFO: relaunching applications... 00:03:51.883 05:58:58 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:51.883 05:58:58 -- json_config/json_config.sh@98 -- # local app=target 00:03:51.883 05:58:58 -- json_config/json_config.sh@99 -- # shift 00:03:51.883 05:58:58 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:03:51.883 05:58:58 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:03:51.883 05:58:58 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:03:51.883 05:58:58 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:03:51.883 05:58:58 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:03:51.883 05:58:58 -- json_config/json_config.sh@111 -- # app_pid[$app]=993769 00:03:51.883 05:58:58 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:51.883 05:58:58 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:03:51.883 Waiting for target to run... 00:03:51.883 05:58:58 -- json_config/json_config.sh@114 -- # waitforlisten 993769 /var/tmp/spdk_tgt.sock 00:03:51.883 05:58:58 -- common/autotest_common.sh@819 -- # '[' -z 993769 ']' 00:03:51.883 05:58:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:51.883 05:58:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:03:51.883 05:58:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:51.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:51.883 05:58:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:03:51.883 05:58:58 -- common/autotest_common.sh@10 -- # set +x 00:03:51.883 [2024-07-13 05:58:58.379459] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:03:51.883 [2024-07-13 05:58:58.379543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid993769 ] 00:03:52.139 EAL: No free 2048 kB hugepages reported on node 1 00:03:52.396 [2024-07-13 05:58:58.893363] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:52.654 [2024-07-13 05:58:58.998347] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:52.654 [2024-07-13 05:58:58.998532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.955 [2024-07-13 05:59:02.034992] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:55.955 [2024-07-13 05:59:02.067431] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:55.955 05:59:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:03:55.955 05:59:02 -- common/autotest_common.sh@852 -- # return 0 00:03:55.955 05:59:02 -- json_config/json_config.sh@115 -- # echo '' 00:03:55.955 00:03:55.955 05:59:02 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:03:55.955 05:59:02 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:55.955 INFO: Checking if target configuration is the same... 00:03:55.955 05:59:02 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:55.955 05:59:02 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:03:55.955 05:59:02 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:55.955 + '[' 2 -ne 2 ']' 00:03:55.955 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:55.955 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:55.955 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:55.955 +++ basename /dev/fd/62 00:03:55.955 ++ mktemp /tmp/62.XXX 00:03:55.955 + tmp_file_1=/tmp/62.Rhg 00:03:55.955 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:55.955 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:55.955 + tmp_file_2=/tmp/spdk_tgt_config.json.sdc 00:03:55.955 + ret=0 00:03:55.955 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:56.213 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:56.213 + diff -u /tmp/62.Rhg /tmp/spdk_tgt_config.json.sdc 00:03:56.213 + echo 'INFO: JSON config files are the same' 00:03:56.213 INFO: JSON config files are the same 00:03:56.213 + rm /tmp/62.Rhg /tmp/spdk_tgt_config.json.sdc 00:03:56.213 + exit 0 00:03:56.213 05:59:02 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:03:56.213 05:59:02 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:56.213 INFO: changing configuration and checking if this can be detected... 
00:03:56.213 05:59:02 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:56.213 05:59:02 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:56.484 05:59:02 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:56.484 05:59:02 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:03:56.484 05:59:02 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:56.484 + '[' 2 -ne 2 ']' 00:03:56.484 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:56.484 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:56.484 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:56.484 +++ basename /dev/fd/62 00:03:56.484 ++ mktemp /tmp/62.XXX 00:03:56.484 + tmp_file_1=/tmp/62.qsA 00:03:56.484 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:56.484 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:56.484 + tmp_file_2=/tmp/spdk_tgt_config.json.nlZ 00:03:56.484 + ret=0 00:03:56.484 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:56.761 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:56.761 + diff -u /tmp/62.qsA /tmp/spdk_tgt_config.json.nlZ 00:03:56.761 + ret=1 00:03:56.761 + echo '=== Start of file: /tmp/62.qsA ===' 00:03:56.761 + cat /tmp/62.qsA 00:03:56.761 + echo '=== End of file: /tmp/62.qsA ===' 00:03:56.761 + echo '' 00:03:56.761 + echo '=== Start of file: /tmp/spdk_tgt_config.json.nlZ ===' 00:03:56.761 + cat /tmp/spdk_tgt_config.json.nlZ 00:03:56.761 + echo '=== End of file: /tmp/spdk_tgt_config.json.nlZ ===' 00:03:56.761 + echo '' 00:03:56.761 + rm /tmp/62.qsA /tmp/spdk_tgt_config.json.nlZ 00:03:56.761 + exit 1 00:03:56.761 05:59:03 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:03:56.761 INFO: configuration change detected. 
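The check traced above is a plain textual diff: the live configuration is dumped with save_config, both it and the on-disk spdk_tgt_config.json are normalized by config_filter.py -method sort, and diff -u decides whether anything drifted; deleting MallocBdevForConfigChangeCheck is the deliberate drift that must make the second diff fail. A rough sketch of the same check, assuming config_filter.py reads the JSON on stdin the way json_diff.sh feeds it (the temp-file names are illustrative):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
SORT="$SPDK/test/json_config/config_filter.py -method sort"

$RPC save_config | $SORT > /tmp/live.json                 # running target
$SORT < $SPDK/spdk_tgt_config.json > /tmp/disk.json       # saved reference
diff -u /tmp/disk.json /tmp/live.json && echo 'JSON config files are the same'

$RPC bdev_malloc_delete MallocBdevForConfigChangeCheck    # introduce a change
$RPC save_config | $SORT > /tmp/live.json
diff -u /tmp/disk.json /tmp/live.json || echo 'configuration change detected'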
00:03:56.761 05:59:03 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:03:56.761 05:59:03 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:03:56.761 05:59:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:56.761 05:59:03 -- common/autotest_common.sh@10 -- # set +x 00:03:56.761 05:59:03 -- json_config/json_config.sh@360 -- # local ret=0 00:03:56.761 05:59:03 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:03:56.761 05:59:03 -- json_config/json_config.sh@370 -- # [[ -n 993769 ]] 00:03:56.761 05:59:03 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:03:56.761 05:59:03 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:03:56.761 05:59:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:56.761 05:59:03 -- common/autotest_common.sh@10 -- # set +x 00:03:56.761 05:59:03 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:03:56.761 05:59:03 -- json_config/json_config.sh@246 -- # uname -s 00:03:56.761 05:59:03 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:03:56.761 05:59:03 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:03:56.761 05:59:03 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:03:56.761 05:59:03 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:03:56.761 05:59:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:56.761 05:59:03 -- common/autotest_common.sh@10 -- # set +x 00:03:57.018 05:59:03 -- json_config/json_config.sh@376 -- # killprocess 993769 00:03:57.018 05:59:03 -- common/autotest_common.sh@926 -- # '[' -z 993769 ']' 00:03:57.018 05:59:03 -- common/autotest_common.sh@930 -- # kill -0 993769 00:03:57.018 05:59:03 -- common/autotest_common.sh@931 -- # uname 00:03:57.018 05:59:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:03:57.018 05:59:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 993769 00:03:57.018 05:59:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:03:57.018 05:59:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:03:57.018 05:59:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 993769' 00:03:57.018 killing process with pid 993769 00:03:57.018 05:59:03 -- common/autotest_common.sh@945 -- # kill 993769 00:03:57.018 05:59:03 -- common/autotest_common.sh@950 -- # wait 993769 00:03:58.918 05:59:04 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:58.918 05:59:04 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:03:58.918 05:59:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:58.918 05:59:04 -- common/autotest_common.sh@10 -- # set +x 00:03:58.918 05:59:05 -- json_config/json_config.sh@381 -- # return 0 00:03:58.918 05:59:05 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:03:58.918 INFO: Success 00:03:58.918 00:03:58.918 real 0m15.957s 00:03:58.918 user 0m18.029s 00:03:58.918 sys 0m2.222s 00:03:58.918 05:59:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.918 05:59:05 -- common/autotest_common.sh@10 -- # set +x 00:03:58.918 ************************************ 00:03:58.918 END TEST json_config 00:03:58.918 ************************************ 00:03:58.918 05:59:05 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:58.919 05:59:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:58.919 05:59:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:58.919 05:59:05 -- common/autotest_common.sh@10 -- # set +x 00:03:58.919 ************************************ 00:03:58.919 START TEST json_config_extra_key 00:03:58.919 ************************************ 00:03:58.919 05:59:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:58.919 05:59:05 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:58.919 05:59:05 -- nvmf/common.sh@7 -- # uname -s 00:03:58.919 05:59:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:58.919 05:59:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:58.919 05:59:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:58.919 05:59:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:58.919 05:59:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:58.919 05:59:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:58.919 05:59:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:58.919 05:59:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:58.919 05:59:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:58.919 05:59:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:58.919 05:59:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:58.919 05:59:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:58.919 05:59:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:58.919 05:59:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:58.919 05:59:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:58.919 05:59:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:58.919 05:59:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:58.919 05:59:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:58.919 05:59:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:58.919 05:59:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:58.919 05:59:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:58.919 05:59:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:58.919 05:59:05 -- paths/export.sh@5 -- # export PATH 00:03:58.919 05:59:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:58.919 05:59:05 -- nvmf/common.sh@46 -- # : 0 00:03:58.919 05:59:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:58.919 05:59:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:58.919 05:59:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:58.919 05:59:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:58.919 05:59:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:58.919 05:59:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:58.919 05:59:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:58.919 05:59:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:58.919 05:59:05 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:03:58.919 05:59:05 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:03:58.919 05:59:05 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:58.919 05:59:05 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:03:58.919 05:59:05 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:58.919 05:59:05 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:03:58.919 05:59:05 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:03:58.919 05:59:05 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:03:58.919 05:59:05 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:58.919 05:59:05 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:03:58.919 INFO: launching applications... 
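The 'launching applications...' step that follows is json_config_test_start_app: it starts a bare spdk_tgt with the parameters recorded above (app_params '-m 0x1 -s 1024', configs_path extra_key.json) and blocks until the RPC socket answers. A minimal bash sketch of that start-and-wait pattern, using only binaries and flags visible in this log; the polling loop and the spdk_get_version probe stand in for the harness's actual waitforlisten helper and are illustrative only:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/spdk_tgt.sock
  # start the target with the extra_key JSON config on its own RPC socket
  "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$SOCK" \
      --json "$SPDK/test/json_config/extra_key.json" &
  pid=$!
  # poll the RPC socket until the app answers (spdk_get_version is a cheap query)
  until "$SPDK/scripts/rpc.py" -s "$SOCK" spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done
  echo "target $pid is listening on $SOCK"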
00:03:58.919 05:59:05 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:58.919 05:59:05 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:03:58.919 05:59:05 -- json_config/json_config_extra_key.sh@25 -- # shift 00:03:58.919 05:59:05 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:03:58.919 05:59:05 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:03:58.919 05:59:05 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=994712 00:03:58.919 05:59:05 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:58.919 05:59:05 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:03:58.919 Waiting for target to run... 00:03:58.919 05:59:05 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 994712 /var/tmp/spdk_tgt.sock 00:03:58.919 05:59:05 -- common/autotest_common.sh@819 -- # '[' -z 994712 ']' 00:03:58.919 05:59:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:58.919 05:59:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:03:58.919 05:59:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:58.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:58.919 05:59:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:03:58.919 05:59:05 -- common/autotest_common.sh@10 -- # set +x 00:03:58.919 [2024-07-13 05:59:05.118951] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:03:58.919 [2024-07-13 05:59:05.119036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid994712 ] 00:03:58.919 EAL: No free 2048 kB hugepages reported on node 1 00:03:59.177 [2024-07-13 05:59:05.466773] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:59.177 [2024-07-13 05:59:05.553133] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:59.177 [2024-07-13 05:59:05.553331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:59.742 05:59:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:03:59.742 05:59:06 -- common/autotest_common.sh@852 -- # return 0 00:03:59.742 05:59:06 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:03:59.742 00:03:59.742 05:59:06 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:03:59.742 INFO: shutting down applications... 
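The shutdown announced here (json_config_test_shutdown_app) is a plain signal-and-poll loop: send SIGINT to the recorded PID, then re-check it up to 30 times at 0.5 s intervals until kill -0 fails. Condensed into a standalone sketch with the PID from this run:

  pid=994712                                # app_pid[target] recorded at launch
  kill -SIGINT "$pid"                       # ask the target to shut down cleanly
  for ((i = 0; i < 30; i++)); do
      kill -0 "$pid" 2>/dev/null || break   # kill -0 only checks the PID still exists
      sleep 0.5
  done
  kill -0 "$pid" 2>/dev/null && echo 'target did not exit' || echo 'SPDK target shutdown done'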
00:03:59.742 05:59:06 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:03:59.743 05:59:06 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:03:59.743 05:59:06 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:03:59.743 05:59:06 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 994712 ]] 00:03:59.743 05:59:06 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 994712 00:03:59.743 05:59:06 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:03:59.743 05:59:06 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:03:59.743 05:59:06 -- json_config/json_config_extra_key.sh@50 -- # kill -0 994712 00:03:59.743 05:59:06 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:00.309 05:59:06 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:00.309 05:59:06 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:00.309 05:59:06 -- json_config/json_config_extra_key.sh@50 -- # kill -0 994712 00:04:00.309 05:59:06 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:00.309 05:59:06 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:00.309 05:59:06 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:00.309 05:59:06 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:00.309 SPDK target shutdown done 00:04:00.309 05:59:06 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:00.309 Success 00:04:00.309 00:04:00.309 real 0m1.554s 00:04:00.309 user 0m1.586s 00:04:00.309 sys 0m0.414s 00:04:00.309 05:59:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:00.309 05:59:06 -- common/autotest_common.sh@10 -- # set +x 00:04:00.309 ************************************ 00:04:00.309 END TEST json_config_extra_key 00:04:00.309 ************************************ 00:04:00.309 05:59:06 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:00.309 05:59:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:00.309 05:59:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:00.309 05:59:06 -- common/autotest_common.sh@10 -- # set +x 00:04:00.309 ************************************ 00:04:00.309 START TEST alias_rpc 00:04:00.309 ************************************ 00:04:00.309 05:59:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:00.309 * Looking for test storage... 00:04:00.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:00.309 05:59:06 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:00.309 05:59:06 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=995021 00:04:00.309 05:59:06 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:00.309 05:59:06 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 995021 00:04:00.309 05:59:06 -- common/autotest_common.sh@819 -- # '[' -z 995021 ']' 00:04:00.309 05:59:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:00.309 05:59:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:00.309 05:59:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:00.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:00.309 05:59:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:00.309 05:59:06 -- common/autotest_common.sh@10 -- # set +x 00:04:00.309 [2024-07-13 05:59:06.699390] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:00.309 [2024-07-13 05:59:06.699474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid995021 ] 00:04:00.309 EAL: No free 2048 kB hugepages reported on node 1 00:04:00.309 [2024-07-13 05:59:06.758876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.568 [2024-07-13 05:59:06.865311] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:00.568 [2024-07-13 05:59:06.865479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.134 05:59:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:01.134 05:59:07 -- common/autotest_common.sh@852 -- # return 0 00:04:01.134 05:59:07 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:01.392 05:59:07 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 995021 00:04:01.392 05:59:07 -- common/autotest_common.sh@926 -- # '[' -z 995021 ']' 00:04:01.392 05:59:07 -- common/autotest_common.sh@930 -- # kill -0 995021 00:04:01.392 05:59:07 -- common/autotest_common.sh@931 -- # uname 00:04:01.392 05:59:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:01.392 05:59:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 995021 00:04:01.392 05:59:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:01.392 05:59:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:01.392 05:59:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 995021' 00:04:01.392 killing process with pid 995021 00:04:01.392 05:59:07 -- common/autotest_common.sh@945 -- # kill 995021 00:04:01.392 05:59:07 -- common/autotest_common.sh@950 -- # wait 995021 00:04:01.956 00:04:01.956 real 0m1.764s 00:04:01.956 user 0m2.012s 00:04:01.956 sys 0m0.444s 00:04:01.956 05:59:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.956 05:59:08 -- common/autotest_common.sh@10 -- # set +x 00:04:01.956 ************************************ 00:04:01.956 END TEST alias_rpc 00:04:01.956 ************************************ 00:04:01.956 05:59:08 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:04:01.956 05:59:08 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:01.956 05:59:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:01.956 05:59:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:01.956 05:59:08 -- common/autotest_common.sh@10 -- # set +x 00:04:01.956 ************************************ 00:04:01.956 START TEST spdkcli_tcp 00:04:01.956 ************************************ 00:04:01.956 05:59:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:01.956 * Looking for test storage... 
00:04:01.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:01.957 05:59:08 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:01.957 05:59:08 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:01.957 05:59:08 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:01.957 05:59:08 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:01.957 05:59:08 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:01.957 05:59:08 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:01.957 05:59:08 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:01.957 05:59:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:01.957 05:59:08 -- common/autotest_common.sh@10 -- # set +x 00:04:01.957 05:59:08 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=995219 00:04:01.957 05:59:08 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:01.957 05:59:08 -- spdkcli/tcp.sh@27 -- # waitforlisten 995219 00:04:01.957 05:59:08 -- common/autotest_common.sh@819 -- # '[' -z 995219 ']' 00:04:01.957 05:59:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:01.957 05:59:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:01.957 05:59:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:01.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:01.957 05:59:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:01.957 05:59:08 -- common/autotest_common.sh@10 -- # set +x 00:04:02.215 [2024-07-13 05:59:08.497995] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:04:02.215 [2024-07-13 05:59:08.498070] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid995219 ] 00:04:02.215 EAL: No free 2048 kB hugepages reported on node 1 00:04:02.215 [2024-07-13 05:59:08.554945] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:02.215 [2024-07-13 05:59:08.660242] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:02.215 [2024-07-13 05:59:08.660428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:02.215 [2024-07-13 05:59:08.660432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.149 05:59:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:03.149 05:59:09 -- common/autotest_common.sh@852 -- # return 0 00:04:03.149 05:59:09 -- spdkcli/tcp.sh@31 -- # socat_pid=995360 00:04:03.149 05:59:09 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:03.149 05:59:09 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:03.149 [ 00:04:03.149 "bdev_malloc_delete", 00:04:03.149 "bdev_malloc_create", 00:04:03.149 "bdev_null_resize", 00:04:03.149 "bdev_null_delete", 00:04:03.149 "bdev_null_create", 00:04:03.149 "bdev_nvme_cuse_unregister", 00:04:03.149 "bdev_nvme_cuse_register", 00:04:03.149 "bdev_opal_new_user", 00:04:03.149 "bdev_opal_set_lock_state", 00:04:03.149 "bdev_opal_delete", 00:04:03.149 "bdev_opal_get_info", 00:04:03.149 "bdev_opal_create", 00:04:03.149 "bdev_nvme_opal_revert", 00:04:03.149 "bdev_nvme_opal_init", 00:04:03.149 "bdev_nvme_send_cmd", 00:04:03.149 "bdev_nvme_get_path_iostat", 00:04:03.149 "bdev_nvme_get_mdns_discovery_info", 00:04:03.149 "bdev_nvme_stop_mdns_discovery", 00:04:03.149 "bdev_nvme_start_mdns_discovery", 00:04:03.149 "bdev_nvme_set_multipath_policy", 00:04:03.149 "bdev_nvme_set_preferred_path", 00:04:03.149 "bdev_nvme_get_io_paths", 00:04:03.149 "bdev_nvme_remove_error_injection", 00:04:03.149 "bdev_nvme_add_error_injection", 00:04:03.149 "bdev_nvme_get_discovery_info", 00:04:03.149 "bdev_nvme_stop_discovery", 00:04:03.149 "bdev_nvme_start_discovery", 00:04:03.149 "bdev_nvme_get_controller_health_info", 00:04:03.149 "bdev_nvme_disable_controller", 00:04:03.149 "bdev_nvme_enable_controller", 00:04:03.149 "bdev_nvme_reset_controller", 00:04:03.149 "bdev_nvme_get_transport_statistics", 00:04:03.149 "bdev_nvme_apply_firmware", 00:04:03.149 "bdev_nvme_detach_controller", 00:04:03.149 "bdev_nvme_get_controllers", 00:04:03.149 "bdev_nvme_attach_controller", 00:04:03.149 "bdev_nvme_set_hotplug", 00:04:03.149 "bdev_nvme_set_options", 00:04:03.149 "bdev_passthru_delete", 00:04:03.149 "bdev_passthru_create", 00:04:03.149 "bdev_lvol_grow_lvstore", 00:04:03.149 "bdev_lvol_get_lvols", 00:04:03.149 "bdev_lvol_get_lvstores", 00:04:03.149 "bdev_lvol_delete", 00:04:03.149 "bdev_lvol_set_read_only", 00:04:03.149 "bdev_lvol_resize", 00:04:03.149 "bdev_lvol_decouple_parent", 00:04:03.149 "bdev_lvol_inflate", 00:04:03.149 "bdev_lvol_rename", 00:04:03.149 "bdev_lvol_clone_bdev", 00:04:03.149 "bdev_lvol_clone", 00:04:03.149 "bdev_lvol_snapshot", 00:04:03.149 "bdev_lvol_create", 00:04:03.149 "bdev_lvol_delete_lvstore", 00:04:03.149 "bdev_lvol_rename_lvstore", 00:04:03.149 "bdev_lvol_create_lvstore", 00:04:03.149 "bdev_raid_set_options", 00:04:03.149 
"bdev_raid_remove_base_bdev", 00:04:03.149 "bdev_raid_add_base_bdev", 00:04:03.149 "bdev_raid_delete", 00:04:03.149 "bdev_raid_create", 00:04:03.149 "bdev_raid_get_bdevs", 00:04:03.149 "bdev_error_inject_error", 00:04:03.149 "bdev_error_delete", 00:04:03.149 "bdev_error_create", 00:04:03.149 "bdev_split_delete", 00:04:03.149 "bdev_split_create", 00:04:03.150 "bdev_delay_delete", 00:04:03.150 "bdev_delay_create", 00:04:03.150 "bdev_delay_update_latency", 00:04:03.150 "bdev_zone_block_delete", 00:04:03.150 "bdev_zone_block_create", 00:04:03.150 "blobfs_create", 00:04:03.150 "blobfs_detect", 00:04:03.150 "blobfs_set_cache_size", 00:04:03.150 "bdev_aio_delete", 00:04:03.150 "bdev_aio_rescan", 00:04:03.150 "bdev_aio_create", 00:04:03.150 "bdev_ftl_set_property", 00:04:03.150 "bdev_ftl_get_properties", 00:04:03.150 "bdev_ftl_get_stats", 00:04:03.150 "bdev_ftl_unmap", 00:04:03.150 "bdev_ftl_unload", 00:04:03.150 "bdev_ftl_delete", 00:04:03.150 "bdev_ftl_load", 00:04:03.150 "bdev_ftl_create", 00:04:03.150 "bdev_virtio_attach_controller", 00:04:03.150 "bdev_virtio_scsi_get_devices", 00:04:03.150 "bdev_virtio_detach_controller", 00:04:03.150 "bdev_virtio_blk_set_hotplug", 00:04:03.150 "bdev_iscsi_delete", 00:04:03.150 "bdev_iscsi_create", 00:04:03.150 "bdev_iscsi_set_options", 00:04:03.150 "accel_error_inject_error", 00:04:03.150 "ioat_scan_accel_module", 00:04:03.150 "dsa_scan_accel_module", 00:04:03.150 "iaa_scan_accel_module", 00:04:03.150 "iscsi_set_options", 00:04:03.150 "iscsi_get_auth_groups", 00:04:03.150 "iscsi_auth_group_remove_secret", 00:04:03.150 "iscsi_auth_group_add_secret", 00:04:03.150 "iscsi_delete_auth_group", 00:04:03.150 "iscsi_create_auth_group", 00:04:03.150 "iscsi_set_discovery_auth", 00:04:03.150 "iscsi_get_options", 00:04:03.150 "iscsi_target_node_request_logout", 00:04:03.150 "iscsi_target_node_set_redirect", 00:04:03.150 "iscsi_target_node_set_auth", 00:04:03.150 "iscsi_target_node_add_lun", 00:04:03.150 "iscsi_get_connections", 00:04:03.150 "iscsi_portal_group_set_auth", 00:04:03.150 "iscsi_start_portal_group", 00:04:03.150 "iscsi_delete_portal_group", 00:04:03.150 "iscsi_create_portal_group", 00:04:03.150 "iscsi_get_portal_groups", 00:04:03.150 "iscsi_delete_target_node", 00:04:03.150 "iscsi_target_node_remove_pg_ig_maps", 00:04:03.150 "iscsi_target_node_add_pg_ig_maps", 00:04:03.150 "iscsi_create_target_node", 00:04:03.150 "iscsi_get_target_nodes", 00:04:03.150 "iscsi_delete_initiator_group", 00:04:03.150 "iscsi_initiator_group_remove_initiators", 00:04:03.150 "iscsi_initiator_group_add_initiators", 00:04:03.150 "iscsi_create_initiator_group", 00:04:03.150 "iscsi_get_initiator_groups", 00:04:03.150 "nvmf_set_crdt", 00:04:03.150 "nvmf_set_config", 00:04:03.150 "nvmf_set_max_subsystems", 00:04:03.150 "nvmf_subsystem_get_listeners", 00:04:03.150 "nvmf_subsystem_get_qpairs", 00:04:03.150 "nvmf_subsystem_get_controllers", 00:04:03.150 "nvmf_get_stats", 00:04:03.150 "nvmf_get_transports", 00:04:03.150 "nvmf_create_transport", 00:04:03.150 "nvmf_get_targets", 00:04:03.150 "nvmf_delete_target", 00:04:03.150 "nvmf_create_target", 00:04:03.150 "nvmf_subsystem_allow_any_host", 00:04:03.150 "nvmf_subsystem_remove_host", 00:04:03.150 "nvmf_subsystem_add_host", 00:04:03.150 "nvmf_subsystem_remove_ns", 00:04:03.150 "nvmf_subsystem_add_ns", 00:04:03.150 "nvmf_subsystem_listener_set_ana_state", 00:04:03.150 "nvmf_discovery_get_referrals", 00:04:03.150 "nvmf_discovery_remove_referral", 00:04:03.150 "nvmf_discovery_add_referral", 00:04:03.150 "nvmf_subsystem_remove_listener", 
00:04:03.150 "nvmf_subsystem_add_listener", 00:04:03.150 "nvmf_delete_subsystem", 00:04:03.150 "nvmf_create_subsystem", 00:04:03.150 "nvmf_get_subsystems", 00:04:03.150 "env_dpdk_get_mem_stats", 00:04:03.150 "nbd_get_disks", 00:04:03.150 "nbd_stop_disk", 00:04:03.150 "nbd_start_disk", 00:04:03.150 "ublk_recover_disk", 00:04:03.150 "ublk_get_disks", 00:04:03.150 "ublk_stop_disk", 00:04:03.150 "ublk_start_disk", 00:04:03.150 "ublk_destroy_target", 00:04:03.150 "ublk_create_target", 00:04:03.150 "virtio_blk_create_transport", 00:04:03.150 "virtio_blk_get_transports", 00:04:03.150 "vhost_controller_set_coalescing", 00:04:03.150 "vhost_get_controllers", 00:04:03.150 "vhost_delete_controller", 00:04:03.150 "vhost_create_blk_controller", 00:04:03.150 "vhost_scsi_controller_remove_target", 00:04:03.150 "vhost_scsi_controller_add_target", 00:04:03.150 "vhost_start_scsi_controller", 00:04:03.150 "vhost_create_scsi_controller", 00:04:03.150 "thread_set_cpumask", 00:04:03.150 "framework_get_scheduler", 00:04:03.150 "framework_set_scheduler", 00:04:03.150 "framework_get_reactors", 00:04:03.150 "thread_get_io_channels", 00:04:03.150 "thread_get_pollers", 00:04:03.150 "thread_get_stats", 00:04:03.150 "framework_monitor_context_switch", 00:04:03.150 "spdk_kill_instance", 00:04:03.150 "log_enable_timestamps", 00:04:03.150 "log_get_flags", 00:04:03.150 "log_clear_flag", 00:04:03.150 "log_set_flag", 00:04:03.150 "log_get_level", 00:04:03.150 "log_set_level", 00:04:03.150 "log_get_print_level", 00:04:03.150 "log_set_print_level", 00:04:03.150 "framework_enable_cpumask_locks", 00:04:03.150 "framework_disable_cpumask_locks", 00:04:03.150 "framework_wait_init", 00:04:03.150 "framework_start_init", 00:04:03.150 "scsi_get_devices", 00:04:03.150 "bdev_get_histogram", 00:04:03.150 "bdev_enable_histogram", 00:04:03.150 "bdev_set_qos_limit", 00:04:03.150 "bdev_set_qd_sampling_period", 00:04:03.150 "bdev_get_bdevs", 00:04:03.150 "bdev_reset_iostat", 00:04:03.150 "bdev_get_iostat", 00:04:03.150 "bdev_examine", 00:04:03.150 "bdev_wait_for_examine", 00:04:03.150 "bdev_set_options", 00:04:03.150 "notify_get_notifications", 00:04:03.150 "notify_get_types", 00:04:03.150 "accel_get_stats", 00:04:03.150 "accel_set_options", 00:04:03.150 "accel_set_driver", 00:04:03.150 "accel_crypto_key_destroy", 00:04:03.150 "accel_crypto_keys_get", 00:04:03.150 "accel_crypto_key_create", 00:04:03.150 "accel_assign_opc", 00:04:03.150 "accel_get_module_info", 00:04:03.150 "accel_get_opc_assignments", 00:04:03.150 "vmd_rescan", 00:04:03.150 "vmd_remove_device", 00:04:03.150 "vmd_enable", 00:04:03.150 "sock_set_default_impl", 00:04:03.150 "sock_impl_set_options", 00:04:03.150 "sock_impl_get_options", 00:04:03.150 "iobuf_get_stats", 00:04:03.150 "iobuf_set_options", 00:04:03.150 "framework_get_pci_devices", 00:04:03.150 "framework_get_config", 00:04:03.150 "framework_get_subsystems", 00:04:03.150 "trace_get_info", 00:04:03.150 "trace_get_tpoint_group_mask", 00:04:03.150 "trace_disable_tpoint_group", 00:04:03.150 "trace_enable_tpoint_group", 00:04:03.150 "trace_clear_tpoint_mask", 00:04:03.150 "trace_set_tpoint_mask", 00:04:03.150 "spdk_get_version", 00:04:03.150 "rpc_get_methods" 00:04:03.150 ] 00:04:03.150 05:59:09 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:03.150 05:59:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:03.150 05:59:09 -- common/autotest_common.sh@10 -- # set +x 00:04:03.408 05:59:09 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:03.408 05:59:09 -- spdkcli/tcp.sh@38 -- # killprocess 
995219 00:04:03.408 05:59:09 -- common/autotest_common.sh@926 -- # '[' -z 995219 ']' 00:04:03.408 05:59:09 -- common/autotest_common.sh@930 -- # kill -0 995219 00:04:03.408 05:59:09 -- common/autotest_common.sh@931 -- # uname 00:04:03.408 05:59:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:03.408 05:59:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 995219 00:04:03.408 05:59:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:03.408 05:59:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:03.408 05:59:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 995219' 00:04:03.408 killing process with pid 995219 00:04:03.408 05:59:09 -- common/autotest_common.sh@945 -- # kill 995219 00:04:03.408 05:59:09 -- common/autotest_common.sh@950 -- # wait 995219 00:04:03.667 00:04:03.667 real 0m1.754s 00:04:03.667 user 0m3.379s 00:04:03.667 sys 0m0.455s 00:04:03.667 05:59:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.667 05:59:10 -- common/autotest_common.sh@10 -- # set +x 00:04:03.667 ************************************ 00:04:03.667 END TEST spdkcli_tcp 00:04:03.667 ************************************ 00:04:03.667 05:59:10 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:03.667 05:59:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:03.667 05:59:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:03.667 05:59:10 -- common/autotest_common.sh@10 -- # set +x 00:04:03.667 ************************************ 00:04:03.667 START TEST dpdk_mem_utility 00:04:03.667 ************************************ 00:04:03.926 05:59:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:03.926 * Looking for test storage... 00:04:03.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:03.926 05:59:10 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:03.926 05:59:10 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=995556 00:04:03.926 05:59:10 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:03.926 05:59:10 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 995556 00:04:03.926 05:59:10 -- common/autotest_common.sh@819 -- # '[' -z 995556 ']' 00:04:03.926 05:59:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:03.926 05:59:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:03.926 05:59:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:03.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:03.926 05:59:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:03.926 05:59:10 -- common/autotest_common.sh@10 -- # set +x 00:04:03.926 [2024-07-13 05:59:10.273777] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:04:03.926 [2024-07-13 05:59:10.273882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid995556 ] 00:04:03.926 EAL: No free 2048 kB hugepages reported on node 1 00:04:03.926 [2024-07-13 05:59:10.329735] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.926 [2024-07-13 05:59:10.433177] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:03.926 [2024-07-13 05:59:10.433353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.859 05:59:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:04.859 05:59:11 -- common/autotest_common.sh@852 -- # return 0 00:04:04.859 05:59:11 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:04.859 05:59:11 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:04.859 05:59:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:04.859 05:59:11 -- common/autotest_common.sh@10 -- # set +x 00:04:04.859 { 00:04:04.859 "filename": "/tmp/spdk_mem_dump.txt" 00:04:04.859 } 00:04:04.859 05:59:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:04.859 05:59:11 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:04.859 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:04.859 1 heaps totaling size 814.000000 MiB 00:04:04.859 size: 814.000000 MiB heap id: 0 00:04:04.859 end heaps---------- 00:04:04.859 8 mempools totaling size 598.116089 MiB 00:04:04.859 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:04.859 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:04.859 size: 84.521057 MiB name: bdev_io_995556 00:04:04.859 size: 51.011292 MiB name: evtpool_995556 00:04:04.859 size: 50.003479 MiB name: msgpool_995556 00:04:04.859 size: 21.763794 MiB name: PDU_Pool 00:04:04.859 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:04.859 size: 0.026123 MiB name: Session_Pool 00:04:04.859 end mempools------- 00:04:04.859 6 memzones totaling size 4.142822 MiB 00:04:04.859 size: 1.000366 MiB name: RG_ring_0_995556 00:04:04.859 size: 1.000366 MiB name: RG_ring_1_995556 00:04:04.859 size: 1.000366 MiB name: RG_ring_4_995556 00:04:04.859 size: 1.000366 MiB name: RG_ring_5_995556 00:04:04.859 size: 0.125366 MiB name: RG_ring_2_995556 00:04:04.859 size: 0.015991 MiB name: RG_ring_3_995556 00:04:04.859 end memzones------- 00:04:04.859 05:59:11 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:04.859 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:04.859 list of free elements. 
size: 12.519348 MiB 00:04:04.859 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:04.859 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:04.859 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:04.859 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:04.859 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:04.859 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:04.859 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:04.859 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:04.859 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:04.859 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:04.859 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:04.859 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:04.859 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:04.859 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:04.859 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:04.859 list of standard malloc elements. size: 199.218079 MiB 00:04:04.859 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:04.859 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:04.859 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:04.859 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:04.859 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:04.859 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:04.859 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:04.859 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:04.859 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:04.859 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:04.859 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:04.859 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:04.859 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:04.859 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:04.859 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:04.859 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:04.859 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:04.859 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:04.859 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:04.859 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:04.859 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:04.859 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:04.859 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:04.859 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:04.859 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:04.859 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:04.859 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:04.859 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:04.859 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:04.859 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:04.859 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:04.859 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:04.859 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:04:04.859 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:04.859 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:04.859 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:04.859 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:04.859 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:04.859 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:04.859 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:04.859 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:04.859 list of memzone associated elements. size: 602.262573 MiB 00:04:04.859 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:04.859 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:04.859 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:04.859 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:04.859 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:04.859 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_995556_0 00:04:04.859 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:04.859 associated memzone info: size: 48.002930 MiB name: MP_evtpool_995556_0 00:04:04.859 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:04.859 associated memzone info: size: 48.002930 MiB name: MP_msgpool_995556_0 00:04:04.859 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:04.859 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:04.859 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:04.859 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:04.859 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:04.859 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_995556 00:04:04.859 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:04.859 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_995556 00:04:04.859 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:04.859 associated memzone info: size: 1.007996 MiB name: MP_evtpool_995556 00:04:04.859 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:04.860 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:04.860 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:04.860 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:04.860 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:04.860 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:04.860 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:04.860 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:04.860 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:04.860 associated memzone info: size: 1.000366 MiB name: RG_ring_0_995556 00:04:04.860 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:04.860 associated memzone info: size: 1.000366 MiB name: RG_ring_1_995556 00:04:04.860 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:04.860 associated memzone info: size: 1.000366 MiB name: RG_ring_4_995556 00:04:04.860 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:04.860 associated memzone info: size: 1.000366 MiB name: RG_ring_5_995556 00:04:04.860 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:04.860 associated memzone 
info: size: 0.500366 MiB name: RG_MP_bdev_io_995556 00:04:04.860 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:04.860 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:04.860 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:04.860 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:04.860 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:04.860 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:04.860 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:04.860 associated memzone info: size: 0.125366 MiB name: RG_ring_2_995556 00:04:04.860 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:04.860 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:04.860 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:04.860 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:04.860 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:04.860 associated memzone info: size: 0.015991 MiB name: RG_ring_3_995556 00:04:04.860 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:04.860 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:04.860 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:04.860 associated memzone info: size: 0.000183 MiB name: MP_msgpool_995556 00:04:04.860 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:04.860 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_995556 00:04:04.860 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:04.860 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:04.860 05:59:11 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:04.860 05:59:11 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 995556 00:04:04.860 05:59:11 -- common/autotest_common.sh@926 -- # '[' -z 995556 ']' 00:04:04.860 05:59:11 -- common/autotest_common.sh@930 -- # kill -0 995556 00:04:04.860 05:59:11 -- common/autotest_common.sh@931 -- # uname 00:04:04.860 05:59:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:04.860 05:59:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 995556 00:04:04.860 05:59:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:04.860 05:59:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:04.860 05:59:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 995556' 00:04:04.860 killing process with pid 995556 00:04:04.860 05:59:11 -- common/autotest_common.sh@945 -- # kill 995556 00:04:04.860 05:59:11 -- common/autotest_common.sh@950 -- # wait 995556 00:04:05.425 00:04:05.425 real 0m1.638s 00:04:05.425 user 0m1.800s 00:04:05.425 sys 0m0.423s 00:04:05.425 05:59:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.425 05:59:11 -- common/autotest_common.sh@10 -- # set +x 00:04:05.425 ************************************ 00:04:05.425 END TEST dpdk_mem_utility 00:04:05.425 ************************************ 00:04:05.425 05:59:11 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:05.425 05:59:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:05.425 05:59:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:05.425 05:59:11 -- common/autotest_common.sh@10 -- # set +x 00:04:05.425 
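The heap/mempool/memzone report above comes from two steps visible in the xtrace: the env_dpdk_get_mem_stats RPC makes the target write its DPDK memory statistics to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then formats that file, with a second pass using the -m 0 flag seen here for the detailed element and memzone listing of heap 0. Run by hand against a target on the default RPC socket it would look roughly like this (add -s if the target uses a non-default socket):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # ask the target to dump DPDK memory stats; the reply names the output file
  "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats   # -> { "filename": "/tmp/spdk_mem_dump.txt" }
  # summarize heaps/mempools/memzones, then repeat with -m 0 as the test does
  "$SPDK/scripts/dpdk_mem_info.py"
  "$SPDK/scripts/dpdk_mem_info.py" -m 0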
************************************ 00:04:05.425 START TEST event 00:04:05.425 ************************************ 00:04:05.425 05:59:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:05.425 * Looking for test storage... 00:04:05.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:05.425 05:59:11 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:05.425 05:59:11 -- bdev/nbd_common.sh@6 -- # set -e 00:04:05.425 05:59:11 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:05.425 05:59:11 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:04:05.425 05:59:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:05.425 05:59:11 -- common/autotest_common.sh@10 -- # set +x 00:04:05.425 ************************************ 00:04:05.425 START TEST event_perf 00:04:05.425 ************************************ 00:04:05.425 05:59:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:05.425 Running I/O for 1 seconds...[2024-07-13 05:59:11.911055] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:05.425 [2024-07-13 05:59:11.911138] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid995754 ] 00:04:05.682 EAL: No free 2048 kB hugepages reported on node 1 00:04:05.682 [2024-07-13 05:59:11.979713] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:05.682 [2024-07-13 05:59:12.100534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:05.682 [2024-07-13 05:59:12.100559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:05.682 [2024-07-13 05:59:12.100617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:05.682 [2024-07-13 05:59:12.100620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.050 Running I/O for 1 seconds... 00:04:07.050 lcore 0: 233485 00:04:07.050 lcore 1: 233484 00:04:07.050 lcore 2: 233484 00:04:07.050 lcore 3: 233485 00:04:07.050 done. 
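event_perf above ran with -m 0xF -t 1, i.e. reactors on lcores 0-3 for one second, and the four result lines are the per-lcore event counts for that run (~233k each). The mask-to-lcore mapping is plain bit arithmetic; a tiny illustration, not part of the test:

  mask=0xF
  for core in {0..7}; do
      if (( (mask >> core) & 1 )); then
          echo "lcore $core is in the reactor mask"   # 0xF -> lcores 0-3, matching the four result lines
      fi
  done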
00:04:07.050 00:04:07.050 real 0m1.330s 00:04:07.050 user 0m4.231s 00:04:07.050 sys 0m0.092s 00:04:07.050 05:59:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.050 05:59:13 -- common/autotest_common.sh@10 -- # set +x 00:04:07.050 ************************************ 00:04:07.050 END TEST event_perf 00:04:07.050 ************************************ 00:04:07.050 05:59:13 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:07.050 05:59:13 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:07.050 05:59:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:07.050 05:59:13 -- common/autotest_common.sh@10 -- # set +x 00:04:07.050 ************************************ 00:04:07.050 START TEST event_reactor 00:04:07.050 ************************************ 00:04:07.050 05:59:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:07.050 [2024-07-13 05:59:13.269640] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:07.050 [2024-07-13 05:59:13.269722] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid995924 ] 00:04:07.050 EAL: No free 2048 kB hugepages reported on node 1 00:04:07.050 [2024-07-13 05:59:13.332390] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.050 [2024-07-13 05:59:13.451692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.421 test_start 00:04:08.421 oneshot 00:04:08.421 tick 100 00:04:08.421 tick 100 00:04:08.421 tick 250 00:04:08.421 tick 100 00:04:08.421 tick 100 00:04:08.421 tick 250 00:04:08.421 tick 500 00:04:08.421 tick 100 00:04:08.421 tick 100 00:04:08.421 tick 100 00:04:08.421 tick 250 00:04:08.421 tick 100 00:04:08.421 tick 100 00:04:08.421 test_end 00:04:08.421 00:04:08.421 real 0m1.315s 00:04:08.421 user 0m1.230s 00:04:08.421 sys 0m0.080s 00:04:08.421 05:59:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.421 05:59:14 -- common/autotest_common.sh@10 -- # set +x 00:04:08.421 ************************************ 00:04:08.421 END TEST event_reactor 00:04:08.421 ************************************ 00:04:08.421 05:59:14 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:08.421 05:59:14 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:08.421 05:59:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:08.421 05:59:14 -- common/autotest_common.sh@10 -- # set +x 00:04:08.421 ************************************ 00:04:08.421 START TEST event_reactor_perf 00:04:08.421 ************************************ 00:04:08.421 05:59:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:08.421 [2024-07-13 05:59:14.608760] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:04:08.421 [2024-07-13 05:59:14.608839] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid996190 ] 00:04:08.421 EAL: No free 2048 kB hugepages reported on node 1 00:04:08.421 [2024-07-13 05:59:14.674009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.421 [2024-07-13 05:59:14.790437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.832 test_start 00:04:09.832 test_end 00:04:09.832 Performance: 350513 events per second 00:04:09.832 00:04:09.832 real 0m1.317s 00:04:09.832 user 0m1.228s 00:04:09.832 sys 0m0.083s 00:04:09.832 05:59:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.832 05:59:15 -- common/autotest_common.sh@10 -- # set +x 00:04:09.832 ************************************ 00:04:09.832 END TEST event_reactor_perf 00:04:09.832 ************************************ 00:04:09.832 05:59:15 -- event/event.sh@49 -- # uname -s 00:04:09.832 05:59:15 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:09.832 05:59:15 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:09.832 05:59:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:09.832 05:59:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:09.832 05:59:15 -- common/autotest_common.sh@10 -- # set +x 00:04:09.832 ************************************ 00:04:09.832 START TEST event_scheduler 00:04:09.832 ************************************ 00:04:09.832 05:59:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:09.832 * Looking for test storage... 00:04:09.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:09.832 05:59:15 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:09.832 05:59:15 -- scheduler/scheduler.sh@35 -- # scheduler_pid=996381 00:04:09.832 05:59:15 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:09.832 05:59:15 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:09.832 05:59:15 -- scheduler/scheduler.sh@37 -- # waitforlisten 996381 00:04:09.832 05:59:15 -- common/autotest_common.sh@819 -- # '[' -z 996381 ']' 00:04:09.832 05:59:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:09.832 05:59:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:09.832 05:59:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:09.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:09.832 05:59:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:09.832 05:59:15 -- common/autotest_common.sh@10 -- # set +x 00:04:09.832 [2024-07-13 05:59:16.026373] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:04:09.832 [2024-07-13 05:59:16.026456] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid996381 ] 00:04:09.832 EAL: No free 2048 kB hugepages reported on node 1 00:04:09.832 [2024-07-13 05:59:16.083505] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:09.832 [2024-07-13 05:59:16.190167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.832 [2024-07-13 05:59:16.190225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:09.833 [2024-07-13 05:59:16.190292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:09.833 [2024-07-13 05:59:16.190296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:09.833 05:59:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:09.833 05:59:16 -- common/autotest_common.sh@852 -- # return 0 00:04:09.833 05:59:16 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:09.833 05:59:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:09.833 05:59:16 -- common/autotest_common.sh@10 -- # set +x 00:04:09.833 POWER: Env isn't set yet! 00:04:09.833 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:09.833 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:04:09.833 POWER: Cannot get available frequencies of lcore 0 00:04:09.833 POWER: Attempting to initialise PSTAT power management... 00:04:09.833 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:04:09.833 POWER: Initialized successfully for lcore 0 power management 00:04:09.833 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:04:09.833 POWER: Initialized successfully for lcore 1 power management 00:04:09.833 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:04:09.833 POWER: Initialized successfully for lcore 2 power management 00:04:09.833 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:04:09.833 POWER: Initialized successfully for lcore 3 power management 00:04:09.833 [2024-07-13 05:59:16.251072] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:09.833 [2024-07-13 05:59:16.251090] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:09.833 [2024-07-13 05:59:16.251101] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:09.833 05:59:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:09.833 05:59:16 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:09.833 05:59:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:09.833 05:59:16 -- common/autotest_common.sh@10 -- # set +x 00:04:10.092 [2024-07-13 05:59:16.356132] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
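The scheduler app is started with --wait-for-rpc, so the test first switches the framework to the dynamic scheduler over RPC and only then lets initialization finish; that ordering is why the 'performance' governor messages and the dynamic scheduler's limits (load 20, core 80, busy 95) appear before 'Scheduler test application started'. Both RPC names are in the rpc_get_methods list earlier in this log; driven by hand the sequence is roughly as below (socket path is an assumption):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/spdk.sock
  # 1) pick the scheduler while the app is still waiting for RPCs (--wait-for-rpc)
  "$SPDK/scripts/rpc.py" -s "$SOCK" framework_set_scheduler dynamic
  # 2) let initialization continue; reactors start and the governor is applied
  "$SPDK/scripts/rpc.py" -s "$SOCK" framework_start_init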
00:04:10.092 05:59:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:10.092 05:59:16 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:10.092 05:59:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:10.092 05:59:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:10.092 05:59:16 -- common/autotest_common.sh@10 -- # set +x 00:04:10.092 ************************************ 00:04:10.092 START TEST scheduler_create_thread 00:04:10.092 ************************************ 00:04:10.092 05:59:16 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:04:10.092 05:59:16 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:10.092 05:59:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:10.092 05:59:16 -- common/autotest_common.sh@10 -- # set +x 00:04:10.092 2 00:04:10.092 05:59:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:10.092 05:59:16 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:10.092 05:59:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:10.092 05:59:16 -- common/autotest_common.sh@10 -- # set +x 00:04:10.092 3 00:04:10.092 05:59:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:10.092 05:59:16 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:10.092 05:59:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:10.092 05:59:16 -- common/autotest_common.sh@10 -- # set +x 00:04:10.092 4 00:04:10.092 05:59:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:10.092 05:59:16 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:10.092 05:59:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:10.092 05:59:16 -- common/autotest_common.sh@10 -- # set +x 00:04:10.092 5 00:04:10.092 05:59:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:10.092 05:59:16 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:10.092 05:59:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:10.092 05:59:16 -- common/autotest_common.sh@10 -- # set +x 00:04:10.092 6 00:04:10.092 05:59:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:10.092 05:59:16 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:10.092 05:59:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:10.092 05:59:16 -- common/autotest_common.sh@10 -- # set +x 00:04:10.092 7 00:04:10.092 05:59:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:10.092 05:59:16 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:10.092 05:59:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:10.092 05:59:16 -- common/autotest_common.sh@10 -- # set +x 00:04:10.092 8 00:04:10.092 05:59:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:10.092 05:59:16 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:10.092 05:59:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:10.092 05:59:16 -- common/autotest_common.sh@10 -- # set +x 00:04:10.092 9 00:04:10.092 
05:59:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:10.092 05:59:16 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:10.092 05:59:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:10.092 05:59:16 -- common/autotest_common.sh@10 -- # set +x 00:04:10.092 10 00:04:10.092 05:59:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:10.092 05:59:16 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:10.092 05:59:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:10.092 05:59:16 -- common/autotest_common.sh@10 -- # set +x 00:04:10.092 05:59:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:10.092 05:59:16 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:10.092 05:59:16 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:10.092 05:59:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:10.092 05:59:16 -- common/autotest_common.sh@10 -- # set +x 00:04:10.092 05:59:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:10.092 05:59:16 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:10.092 05:59:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:10.092 05:59:16 -- common/autotest_common.sh@10 -- # set +x 00:04:11.466 05:59:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:11.466 05:59:17 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:11.466 05:59:17 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:11.466 05:59:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:11.466 05:59:17 -- common/autotest_common.sh@10 -- # set +x 00:04:12.838 05:59:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:12.838 00:04:12.838 real 0m2.616s 00:04:12.838 user 0m0.012s 00:04:12.838 sys 0m0.002s 00:04:12.838 05:59:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.838 05:59:18 -- common/autotest_common.sh@10 -- # set +x 00:04:12.838 ************************************ 00:04:12.838 END TEST scheduler_create_thread 00:04:12.838 ************************************ 00:04:12.838 05:59:18 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:12.838 05:59:18 -- scheduler/scheduler.sh@46 -- # killprocess 996381 00:04:12.838 05:59:18 -- common/autotest_common.sh@926 -- # '[' -z 996381 ']' 00:04:12.838 05:59:18 -- common/autotest_common.sh@930 -- # kill -0 996381 00:04:12.838 05:59:18 -- common/autotest_common.sh@931 -- # uname 00:04:12.838 05:59:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:12.838 05:59:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 996381 00:04:12.838 05:59:19 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:04:12.838 05:59:19 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:04:12.838 05:59:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 996381' 00:04:12.838 killing process with pid 996381 00:04:12.838 05:59:19 -- common/autotest_common.sh@945 -- # kill 996381 00:04:12.838 05:59:19 -- common/autotest_common.sh@950 -- # wait 996381 00:04:13.096 [2024-07-13 05:59:19.459116] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
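For reference, the scheduler_create_thread steps traced above reduce to a short RPC sequence. The sketch below is a condensed, illustrative replay of those calls, assuming SPDK's stock rpc.py client with the scheduler_plugin used here (the traced test actually goes through its rpc_cmd wrapper; the thread names, masks and activity percentages are taken directly from the trace, everything else is an assumption):

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py --plugin scheduler_plugin"

    # Four busy threads pinned to cores 0-3, then four idle threads on the same cores.
    for mask in 0x1 0x2 0x4 0x8; do
      $RPC scheduler_thread_create -n active_pinned -m "$mask" -a 100
    done
    for mask in 0x1 0x2 0x4 0x8; do
      $RPC scheduler_thread_create -n idle_pinned -m "$mask" -a 0
    done

    # Unpinned threads: one roughly one-third active, one created idle and raised to 50%.
    $RPC scheduler_thread_create -n one_third_active -a 30
    thread_id=$($RPC scheduler_thread_create -n half_active -a 0)
    $RPC scheduler_thread_set_active "$thread_id" 50

    # A throw-away thread created fully active and immediately deleted again.
    thread_id=$($RPC scheduler_thread_create -n deleted -a 100)
    $RPC scheduler_thread_delete "$thread_id"
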
00:04:13.354 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:04:13.354 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:04:13.354 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:04:13.354 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:04:13.354 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:04:13.354 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:04:13.354 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:04:13.354 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:04:13.354 00:04:13.354 real 0m3.791s 00:04:13.354 user 0m5.661s 00:04:13.354 sys 0m0.291s 00:04:13.354 05:59:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.354 05:59:19 -- common/autotest_common.sh@10 -- # set +x 00:04:13.354 ************************************ 00:04:13.354 END TEST event_scheduler 00:04:13.354 ************************************ 00:04:13.354 05:59:19 -- event/event.sh@51 -- # modprobe -n nbd 00:04:13.354 05:59:19 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:13.354 05:59:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:13.354 05:59:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:13.354 05:59:19 -- common/autotest_common.sh@10 -- # set +x 00:04:13.354 ************************************ 00:04:13.354 START TEST app_repeat 00:04:13.354 ************************************ 00:04:13.354 05:59:19 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:04:13.354 05:59:19 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:13.354 05:59:19 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:13.354 05:59:19 -- event/event.sh@13 -- # local nbd_list 00:04:13.354 05:59:19 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:13.354 05:59:19 -- event/event.sh@14 -- # local bdev_list 00:04:13.354 05:59:19 -- event/event.sh@15 -- # local repeat_times=4 00:04:13.354 05:59:19 -- event/event.sh@17 -- # modprobe nbd 00:04:13.354 05:59:19 -- event/event.sh@19 -- # repeat_pid=996844 00:04:13.354 05:59:19 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:13.354 05:59:19 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:13.354 05:59:19 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 996844' 00:04:13.354 Process app_repeat pid: 996844 00:04:13.354 05:59:19 -- event/event.sh@23 -- # for i in {0..2} 00:04:13.354 05:59:19 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:13.354 spdk_app_start Round 0 00:04:13.354 05:59:19 -- event/event.sh@25 -- # waitforlisten 996844 /var/tmp/spdk-nbd.sock 00:04:13.354 05:59:19 -- common/autotest_common.sh@819 -- # '[' -z 996844 ']' 00:04:13.354 05:59:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:13.354 05:59:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:13.354 05:59:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:13.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:13.354 05:59:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:13.354 05:59:19 -- common/autotest_common.sh@10 -- # set +x 00:04:13.354 [2024-07-13 05:59:19.788808] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:13.354 [2024-07-13 05:59:19.788906] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid996844 ] 00:04:13.354 EAL: No free 2048 kB hugepages reported on node 1 00:04:13.354 [2024-07-13 05:59:19.853008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:13.612 [2024-07-13 05:59:19.967031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:13.612 [2024-07-13 05:59:19.967037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.546 05:59:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:14.546 05:59:20 -- common/autotest_common.sh@852 -- # return 0 00:04:14.546 05:59:20 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:14.546 Malloc0 00:04:14.546 05:59:20 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:14.804 Malloc1 00:04:14.804 05:59:21 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:14.804 05:59:21 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:14.804 05:59:21 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:14.804 05:59:21 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:14.804 05:59:21 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:14.804 05:59:21 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:14.804 05:59:21 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:14.804 05:59:21 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:14.804 05:59:21 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:14.804 05:59:21 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:14.804 05:59:21 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:14.804 05:59:21 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:14.804 05:59:21 -- bdev/nbd_common.sh@12 -- # local i 00:04:14.804 05:59:21 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:14.804 05:59:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:14.804 05:59:21 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:15.062 /dev/nbd0 00:04:15.062 05:59:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:15.062 05:59:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:15.062 05:59:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:04:15.062 05:59:21 -- common/autotest_common.sh@857 -- # local i 00:04:15.062 05:59:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:15.062 05:59:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:15.062 05:59:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:04:15.062 05:59:21 -- 
common/autotest_common.sh@861 -- # break 00:04:15.062 05:59:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:15.062 05:59:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:15.062 05:59:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:15.062 1+0 records in 00:04:15.062 1+0 records out 00:04:15.062 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000145347 s, 28.2 MB/s 00:04:15.062 05:59:21 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:15.062 05:59:21 -- common/autotest_common.sh@874 -- # size=4096 00:04:15.062 05:59:21 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:15.062 05:59:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:15.062 05:59:21 -- common/autotest_common.sh@877 -- # return 0 00:04:15.062 05:59:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:15.062 05:59:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:15.062 05:59:21 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:15.344 /dev/nbd1 00:04:15.344 05:59:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:15.344 05:59:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:15.344 05:59:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:04:15.344 05:59:21 -- common/autotest_common.sh@857 -- # local i 00:04:15.344 05:59:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:15.344 05:59:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:15.344 05:59:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:04:15.344 05:59:21 -- common/autotest_common.sh@861 -- # break 00:04:15.344 05:59:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:15.344 05:59:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:15.344 05:59:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:15.344 1+0 records in 00:04:15.344 1+0 records out 00:04:15.344 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197488 s, 20.7 MB/s 00:04:15.344 05:59:21 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:15.344 05:59:21 -- common/autotest_common.sh@874 -- # size=4096 00:04:15.344 05:59:21 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:15.344 05:59:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:15.344 05:59:21 -- common/autotest_common.sh@877 -- # return 0 00:04:15.344 05:59:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:15.344 05:59:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:15.344 05:59:21 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:15.344 05:59:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:15.344 05:59:21 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:15.601 05:59:22 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:15.601 { 00:04:15.601 "nbd_device": "/dev/nbd0", 00:04:15.601 "bdev_name": "Malloc0" 00:04:15.601 }, 00:04:15.602 { 00:04:15.602 "nbd_device": "/dev/nbd1", 
00:04:15.602 "bdev_name": "Malloc1" 00:04:15.602 } 00:04:15.602 ]' 00:04:15.602 05:59:22 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:15.602 { 00:04:15.602 "nbd_device": "/dev/nbd0", 00:04:15.602 "bdev_name": "Malloc0" 00:04:15.602 }, 00:04:15.602 { 00:04:15.602 "nbd_device": "/dev/nbd1", 00:04:15.602 "bdev_name": "Malloc1" 00:04:15.602 } 00:04:15.602 ]' 00:04:15.602 05:59:22 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:15.602 05:59:22 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:15.602 /dev/nbd1' 00:04:15.602 05:59:22 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:15.602 /dev/nbd1' 00:04:15.602 05:59:22 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:15.602 05:59:22 -- bdev/nbd_common.sh@65 -- # count=2 00:04:15.602 05:59:22 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:15.602 05:59:22 -- bdev/nbd_common.sh@95 -- # count=2 00:04:15.602 05:59:22 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:15.602 05:59:22 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:15.602 05:59:22 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:15.602 05:59:22 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:15.602 05:59:22 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:15.602 05:59:22 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:15.602 05:59:22 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:15.602 05:59:22 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:15.602 256+0 records in 00:04:15.602 256+0 records out 00:04:15.602 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0053157 s, 197 MB/s 00:04:15.602 05:59:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:15.602 05:59:22 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:15.859 256+0 records in 00:04:15.859 256+0 records out 00:04:15.859 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238563 s, 44.0 MB/s 00:04:15.859 05:59:22 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:15.859 05:59:22 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:15.859 256+0 records in 00:04:15.859 256+0 records out 00:04:15.859 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254089 s, 41.3 MB/s 00:04:15.859 05:59:22 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:15.859 05:59:22 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:15.859 05:59:22 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:15.859 05:59:22 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:15.859 05:59:22 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:15.859 05:59:22 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:15.859 05:59:22 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:15.859 05:59:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:15.859 05:59:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:15.859 05:59:22 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:15.859 05:59:22 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:15.859 05:59:22 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:15.859 05:59:22 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:15.859 05:59:22 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:15.859 05:59:22 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:15.859 05:59:22 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:15.859 05:59:22 -- bdev/nbd_common.sh@51 -- # local i 00:04:15.859 05:59:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:15.859 05:59:22 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:16.117 05:59:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:16.117 05:59:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:16.117 05:59:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:16.117 05:59:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:16.117 05:59:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:16.117 05:59:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:16.117 05:59:22 -- bdev/nbd_common.sh@41 -- # break 00:04:16.117 05:59:22 -- bdev/nbd_common.sh@45 -- # return 0 00:04:16.117 05:59:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:16.117 05:59:22 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:16.374 05:59:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:16.374 05:59:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:16.374 05:59:22 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:16.374 05:59:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:16.374 05:59:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:16.374 05:59:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:16.374 05:59:22 -- bdev/nbd_common.sh@41 -- # break 00:04:16.374 05:59:22 -- bdev/nbd_common.sh@45 -- # return 0 00:04:16.374 05:59:22 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:16.374 05:59:22 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:16.374 05:59:22 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:16.631 05:59:22 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:16.631 05:59:22 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:16.631 05:59:22 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:16.631 05:59:22 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:16.631 05:59:22 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:16.631 05:59:22 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:16.631 05:59:22 -- bdev/nbd_common.sh@65 -- # true 00:04:16.631 05:59:22 -- bdev/nbd_common.sh@65 -- # count=0 00:04:16.631 05:59:22 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:16.631 05:59:22 -- bdev/nbd_common.sh@104 -- # count=0 00:04:16.631 05:59:22 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:16.631 05:59:22 -- bdev/nbd_common.sh@109 -- # return 0 00:04:16.631 05:59:22 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:16.888 05:59:23 -- event/event.sh@35 -- # 
sleep 3 00:04:17.146 [2024-07-13 05:59:23.531137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:17.146 [2024-07-13 05:59:23.644026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.146 [2024-07-13 05:59:23.644026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:17.404 [2024-07-13 05:59:23.706173] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:17.404 [2024-07-13 05:59:23.706257] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:19.928 05:59:26 -- event/event.sh@23 -- # for i in {0..2} 00:04:19.928 05:59:26 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:19.928 spdk_app_start Round 1 00:04:19.928 05:59:26 -- event/event.sh@25 -- # waitforlisten 996844 /var/tmp/spdk-nbd.sock 00:04:19.928 05:59:26 -- common/autotest_common.sh@819 -- # '[' -z 996844 ']' 00:04:19.928 05:59:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:19.928 05:59:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:19.928 05:59:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:19.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:19.928 05:59:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:19.928 05:59:26 -- common/autotest_common.sh@10 -- # set +x 00:04:20.186 05:59:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:20.186 05:59:26 -- common/autotest_common.sh@852 -- # return 0 00:04:20.186 05:59:26 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:20.444 Malloc0 00:04:20.444 05:59:26 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:20.702 Malloc1 00:04:20.702 05:59:27 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:20.702 05:59:27 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.702 05:59:27 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:20.702 05:59:27 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:20.702 05:59:27 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.702 05:59:27 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:20.702 05:59:27 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:20.702 05:59:27 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.702 05:59:27 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:20.702 05:59:27 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:20.702 05:59:27 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:20.702 05:59:27 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:20.702 05:59:27 -- bdev/nbd_common.sh@12 -- # local i 00:04:20.702 05:59:27 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:20.702 05:59:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:20.702 05:59:27 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:20.960 /dev/nbd0 00:04:20.960 05:59:27 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:20.960 05:59:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:20.960 05:59:27 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:04:20.960 05:59:27 -- common/autotest_common.sh@857 -- # local i 00:04:20.960 05:59:27 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:20.960 05:59:27 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:20.960 05:59:27 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:04:20.960 05:59:27 -- common/autotest_common.sh@861 -- # break 00:04:20.960 05:59:27 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:20.960 05:59:27 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:20.960 05:59:27 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:20.960 1+0 records in 00:04:20.960 1+0 records out 00:04:20.960 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019904 s, 20.6 MB/s 00:04:20.960 05:59:27 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:20.960 05:59:27 -- common/autotest_common.sh@874 -- # size=4096 00:04:20.960 05:59:27 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:20.960 05:59:27 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:20.960 05:59:27 -- common/autotest_common.sh@877 -- # return 0 00:04:20.960 05:59:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:20.960 05:59:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:20.960 05:59:27 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:21.218 /dev/nbd1 00:04:21.218 05:59:27 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:21.218 05:59:27 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:21.218 05:59:27 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:04:21.218 05:59:27 -- common/autotest_common.sh@857 -- # local i 00:04:21.218 05:59:27 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:21.218 05:59:27 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:21.218 05:59:27 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:04:21.218 05:59:27 -- common/autotest_common.sh@861 -- # break 00:04:21.218 05:59:27 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:21.218 05:59:27 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:21.218 05:59:27 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:21.218 1+0 records in 00:04:21.218 1+0 records out 00:04:21.218 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250566 s, 16.3 MB/s 00:04:21.218 05:59:27 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:21.218 05:59:27 -- common/autotest_common.sh@874 -- # size=4096 00:04:21.218 05:59:27 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:21.218 05:59:27 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:21.218 05:59:27 -- common/autotest_common.sh@877 -- # return 0 00:04:21.218 05:59:27 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:21.218 05:59:27 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:21.218 05:59:27 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:21.218 05:59:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:21.218 05:59:27 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:21.477 { 00:04:21.477 "nbd_device": "/dev/nbd0", 00:04:21.477 "bdev_name": "Malloc0" 00:04:21.477 }, 00:04:21.477 { 00:04:21.477 "nbd_device": "/dev/nbd1", 00:04:21.477 "bdev_name": "Malloc1" 00:04:21.477 } 00:04:21.477 ]' 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:21.477 { 00:04:21.477 "nbd_device": "/dev/nbd0", 00:04:21.477 "bdev_name": "Malloc0" 00:04:21.477 }, 00:04:21.477 { 00:04:21.477 "nbd_device": "/dev/nbd1", 00:04:21.477 "bdev_name": "Malloc1" 00:04:21.477 } 00:04:21.477 ]' 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:21.477 /dev/nbd1' 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:21.477 /dev/nbd1' 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@65 -- # count=2 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@95 -- # count=2 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:21.477 256+0 records in 00:04:21.477 256+0 records out 00:04:21.477 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503968 s, 208 MB/s 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:21.477 256+0 records in 00:04:21.477 256+0 records out 00:04:21.477 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0213939 s, 49.0 MB/s 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:21.477 256+0 records in 00:04:21.477 256+0 records out 00:04:21.477 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244543 s, 42.9 MB/s 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@51 -- # local i 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:21.477 05:59:27 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:21.735 05:59:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:21.735 05:59:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:21.735 05:59:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:21.735 05:59:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:21.735 05:59:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:21.735 05:59:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:21.735 05:59:28 -- bdev/nbd_common.sh@41 -- # break 00:04:21.735 05:59:28 -- bdev/nbd_common.sh@45 -- # return 0 00:04:21.735 05:59:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:21.735 05:59:28 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:21.994 05:59:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:21.994 05:59:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:21.994 05:59:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:21.994 05:59:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:21.994 05:59:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:21.994 05:59:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:21.994 05:59:28 -- bdev/nbd_common.sh@41 -- # break 00:04:21.994 05:59:28 -- bdev/nbd_common.sh@45 -- # return 0 00:04:21.994 05:59:28 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:21.994 05:59:28 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:21.994 05:59:28 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:22.252 05:59:28 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:22.252 05:59:28 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:22.252 05:59:28 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:22.252 05:59:28 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:22.252 05:59:28 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:22.252 05:59:28 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:04:22.252 05:59:28 -- bdev/nbd_common.sh@65 -- # true 00:04:22.252 05:59:28 -- bdev/nbd_common.sh@65 -- # count=0 00:04:22.252 05:59:28 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:22.252 05:59:28 -- bdev/nbd_common.sh@104 -- # count=0 00:04:22.252 05:59:28 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:22.252 05:59:28 -- bdev/nbd_common.sh@109 -- # return 0 00:04:22.252 05:59:28 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:22.515 05:59:28 -- event/event.sh@35 -- # sleep 3 00:04:22.773 [2024-07-13 05:59:29.250244] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:23.031 [2024-07-13 05:59:29.365167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:23.031 [2024-07-13 05:59:29.365172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.031 [2024-07-13 05:59:29.426761] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:23.031 [2024-07-13 05:59:29.426845] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:25.555 05:59:31 -- event/event.sh@23 -- # for i in {0..2} 00:04:25.555 05:59:31 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:25.555 spdk_app_start Round 2 00:04:25.555 05:59:31 -- event/event.sh@25 -- # waitforlisten 996844 /var/tmp/spdk-nbd.sock 00:04:25.555 05:59:31 -- common/autotest_common.sh@819 -- # '[' -z 996844 ']' 00:04:25.555 05:59:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:25.555 05:59:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:25.555 05:59:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:25.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
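The nbd_dd_data_verify passes traced in these rounds follow a plain write-then-compare pattern. Condensed as a sketch (the paths, block sizes and 1 MiB compare window are the ones visible in the trace; the loop structure is paraphrased, not copied from nbd_common.sh):

    TESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # write pass: 1 MiB of random data, copied onto every exported NBD device
    dd if=/dev/urandom of="$TESTDIR/nbdrandtest" bs=4096 count=256
    for nbd in "${nbd_list[@]}"; do
      dd if="$TESTDIR/nbdrandtest" of="$nbd" bs=4096 count=256 oflag=direct
    done

    # verify pass: byte-compare the first 1 MiB of each device against the source file
    for nbd in "${nbd_list[@]}"; do
      cmp -b -n 1M "$TESTDIR/nbdrandtest" "$nbd"
    done
    rm "$TESTDIR/nbdrandtest"
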
00:04:25.555 05:59:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:25.555 05:59:31 -- common/autotest_common.sh@10 -- # set +x 00:04:25.811 05:59:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:25.811 05:59:32 -- common/autotest_common.sh@852 -- # return 0 00:04:25.811 05:59:32 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:26.068 Malloc0 00:04:26.068 05:59:32 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:26.326 Malloc1 00:04:26.326 05:59:32 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:26.326 05:59:32 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.326 05:59:32 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:26.326 05:59:32 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:26.326 05:59:32 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.326 05:59:32 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:26.326 05:59:32 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:26.326 05:59:32 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.326 05:59:32 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:26.326 05:59:32 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:26.326 05:59:32 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.326 05:59:32 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:26.326 05:59:32 -- bdev/nbd_common.sh@12 -- # local i 00:04:26.326 05:59:32 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:26.326 05:59:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:26.326 05:59:32 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:26.583 /dev/nbd0 00:04:26.583 05:59:32 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:26.583 05:59:32 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:26.583 05:59:32 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:04:26.583 05:59:32 -- common/autotest_common.sh@857 -- # local i 00:04:26.583 05:59:32 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:26.583 05:59:32 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:26.583 05:59:32 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:04:26.583 05:59:32 -- common/autotest_common.sh@861 -- # break 00:04:26.583 05:59:32 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:26.583 05:59:32 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:26.583 05:59:32 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:26.583 1+0 records in 00:04:26.583 1+0 records out 00:04:26.583 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197754 s, 20.7 MB/s 00:04:26.583 05:59:32 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:26.583 05:59:32 -- common/autotest_common.sh@874 -- # size=4096 00:04:26.583 05:59:32 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:26.583 05:59:32 -- common/autotest_common.sh@876 -- # 
'[' 4096 '!=' 0 ']' 00:04:26.583 05:59:32 -- common/autotest_common.sh@877 -- # return 0 00:04:26.583 05:59:32 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:26.583 05:59:32 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:26.583 05:59:32 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:26.841 /dev/nbd1 00:04:26.841 05:59:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:26.841 05:59:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:26.841 05:59:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:04:26.841 05:59:33 -- common/autotest_common.sh@857 -- # local i 00:04:26.841 05:59:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:26.841 05:59:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:26.841 05:59:33 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:04:26.841 05:59:33 -- common/autotest_common.sh@861 -- # break 00:04:26.841 05:59:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:26.841 05:59:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:26.841 05:59:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:26.841 1+0 records in 00:04:26.841 1+0 records out 00:04:26.841 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199516 s, 20.5 MB/s 00:04:26.841 05:59:33 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:26.841 05:59:33 -- common/autotest_common.sh@874 -- # size=4096 00:04:26.841 05:59:33 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:26.841 05:59:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:26.841 05:59:33 -- common/autotest_common.sh@877 -- # return 0 00:04:26.841 05:59:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:26.841 05:59:33 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:26.841 05:59:33 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:26.841 05:59:33 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.841 05:59:33 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:27.099 05:59:33 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:27.099 { 00:04:27.099 "nbd_device": "/dev/nbd0", 00:04:27.099 "bdev_name": "Malloc0" 00:04:27.099 }, 00:04:27.099 { 00:04:27.099 "nbd_device": "/dev/nbd1", 00:04:27.099 "bdev_name": "Malloc1" 00:04:27.099 } 00:04:27.099 ]' 00:04:27.099 05:59:33 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:27.099 { 00:04:27.099 "nbd_device": "/dev/nbd0", 00:04:27.099 "bdev_name": "Malloc0" 00:04:27.099 }, 00:04:27.099 { 00:04:27.099 "nbd_device": "/dev/nbd1", 00:04:27.099 "bdev_name": "Malloc1" 00:04:27.099 } 00:04:27.099 ]' 00:04:27.099 05:59:33 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:27.099 05:59:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:27.099 /dev/nbd1' 00:04:27.099 05:59:33 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:27.099 /dev/nbd1' 00:04:27.099 05:59:33 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:27.099 05:59:33 -- bdev/nbd_common.sh@65 -- # count=2 00:04:27.099 05:59:33 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:27.099 05:59:33 -- bdev/nbd_common.sh@95 -- # count=2 00:04:27.099 05:59:33 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:27.099 05:59:33 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:27.099 05:59:33 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.099 05:59:33 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:27.099 05:59:33 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:27.099 05:59:33 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:27.099 05:59:33 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:27.099 05:59:33 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:27.099 256+0 records in 00:04:27.099 256+0 records out 00:04:27.099 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00402719 s, 260 MB/s 00:04:27.099 05:59:33 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:27.099 05:59:33 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:27.099 256+0 records in 00:04:27.099 256+0 records out 00:04:27.099 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0246348 s, 42.6 MB/s 00:04:27.099 05:59:33 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:27.099 05:59:33 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:27.099 256+0 records in 00:04:27.099 256+0 records out 00:04:27.099 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249846 s, 42.0 MB/s 00:04:27.099 05:59:33 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:27.099 05:59:33 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.099 05:59:33 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:27.099 05:59:33 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:27.099 05:59:33 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:27.099 05:59:33 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:27.099 05:59:33 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:27.099 05:59:33 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:27.099 05:59:33 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:27.356 05:59:33 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:27.356 05:59:33 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:27.356 05:59:33 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:27.356 05:59:33 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:27.356 05:59:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.356 05:59:33 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:27.356 05:59:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:27.356 05:59:33 -- bdev/nbd_common.sh@51 -- # local i 00:04:27.356 05:59:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:27.356 05:59:33 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:27.613 05:59:33 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:27.613 05:59:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:27.613 05:59:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:27.613 05:59:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:27.613 05:59:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:27.613 05:59:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:27.613 05:59:33 -- bdev/nbd_common.sh@41 -- # break 00:04:27.613 05:59:33 -- bdev/nbd_common.sh@45 -- # return 0 00:04:27.613 05:59:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:27.613 05:59:33 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:27.871 05:59:34 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:27.871 05:59:34 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:27.871 05:59:34 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:27.871 05:59:34 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:27.871 05:59:34 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:27.871 05:59:34 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:27.871 05:59:34 -- bdev/nbd_common.sh@41 -- # break 00:04:27.871 05:59:34 -- bdev/nbd_common.sh@45 -- # return 0 00:04:27.871 05:59:34 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:27.871 05:59:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.871 05:59:34 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:27.871 05:59:34 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:27.871 05:59:34 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:27.871 05:59:34 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:28.129 05:59:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:28.129 05:59:34 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:28.129 05:59:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:28.129 05:59:34 -- bdev/nbd_common.sh@65 -- # true 00:04:28.129 05:59:34 -- bdev/nbd_common.sh@65 -- # count=0 00:04:28.129 05:59:34 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:28.129 05:59:34 -- bdev/nbd_common.sh@104 -- # count=0 00:04:28.129 05:59:34 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:28.129 05:59:34 -- bdev/nbd_common.sh@109 -- # return 0 00:04:28.129 05:59:34 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:28.387 05:59:34 -- event/event.sh@35 -- # sleep 3 00:04:28.645 [2024-07-13 05:59:34.936848] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:28.645 [2024-07-13 05:59:35.052039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:28.645 [2024-07-13 05:59:35.052044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.645 [2024-07-13 05:59:35.113725] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:28.645 [2024-07-13 05:59:35.113807] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
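Each app_repeat round traced above runs the same cycle: wait for the relaunched app to listen on /var/tmp/spdk-nbd.sock, create two malloc bdevs, export them as /dev/nbd0 and /dev/nbd1, run the data-verify pass, stop the NBD disks, send spdk_kill_instance SIGTERM and sleep three seconds while the app (started with -t 4) brings itself back up for the next round. A rough outline, assuming the helper names seen in the trace (waitforlisten, repeat_pid); this is a paraphrase of the traced flow, not the harness code itself:

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    for round in 0 1 2; do
      echo "spdk_app_start Round $round"
      waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock   # harness helper; repeat_pid set when app_repeat was launched

      $RPC bdev_malloc_create 64 4096                      # Malloc0
      $RPC bdev_malloc_create 64 4096                      # Malloc1
      $RPC nbd_start_disk Malloc0 /dev/nbd0
      $RPC nbd_start_disk Malloc1 /dev/nbd1

      # ... write and verify the devices as in the nbd_dd_data_verify sketch above ...

      $RPC nbd_stop_disk /dev/nbd0
      $RPC nbd_stop_disk /dev/nbd1
      $RPC spdk_kill_instance SIGTERM                      # app_repeat restarts the app for the next round
      sleep 3
    done
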
00:04:31.172 05:59:37 -- event/event.sh@38 -- # waitforlisten 996844 /var/tmp/spdk-nbd.sock 00:04:31.172 05:59:37 -- common/autotest_common.sh@819 -- # '[' -z 996844 ']' 00:04:31.172 05:59:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:31.172 05:59:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:31.172 05:59:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:31.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:31.172 05:59:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:31.172 05:59:37 -- common/autotest_common.sh@10 -- # set +x 00:04:31.430 05:59:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:31.430 05:59:37 -- common/autotest_common.sh@852 -- # return 0 00:04:31.430 05:59:37 -- event/event.sh@39 -- # killprocess 996844 00:04:31.430 05:59:37 -- common/autotest_common.sh@926 -- # '[' -z 996844 ']' 00:04:31.430 05:59:37 -- common/autotest_common.sh@930 -- # kill -0 996844 00:04:31.430 05:59:37 -- common/autotest_common.sh@931 -- # uname 00:04:31.430 05:59:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:31.430 05:59:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 996844 00:04:31.688 05:59:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:31.688 05:59:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:31.688 05:59:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 996844' 00:04:31.688 killing process with pid 996844 00:04:31.688 05:59:37 -- common/autotest_common.sh@945 -- # kill 996844 00:04:31.688 05:59:37 -- common/autotest_common.sh@950 -- # wait 996844 00:04:31.688 spdk_app_start is called in Round 0. 00:04:31.688 Shutdown signal received, stop current app iteration 00:04:31.688 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:04:31.688 spdk_app_start is called in Round 1. 00:04:31.688 Shutdown signal received, stop current app iteration 00:04:31.688 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:04:31.688 spdk_app_start is called in Round 2. 00:04:31.688 Shutdown signal received, stop current app iteration 00:04:31.688 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:04:31.688 spdk_app_start is called in Round 3. 
00:04:31.688 Shutdown signal received, stop current app iteration 00:04:31.688 05:59:38 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:31.688 05:59:38 -- event/event.sh@42 -- # return 0 00:04:31.688 00:04:31.688 real 0m18.420s 00:04:31.688 user 0m39.703s 00:04:31.689 sys 0m3.209s 00:04:31.689 05:59:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.689 05:59:38 -- common/autotest_common.sh@10 -- # set +x 00:04:31.689 ************************************ 00:04:31.689 END TEST app_repeat 00:04:31.689 ************************************ 00:04:31.947 05:59:38 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:31.947 05:59:38 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:31.947 05:59:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:31.947 05:59:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:31.947 05:59:38 -- common/autotest_common.sh@10 -- # set +x 00:04:31.947 ************************************ 00:04:31.947 START TEST cpu_locks 00:04:31.947 ************************************ 00:04:31.947 05:59:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:31.947 * Looking for test storage... 00:04:31.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:31.947 05:59:38 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:31.947 05:59:38 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:31.947 05:59:38 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:31.947 05:59:38 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:31.947 05:59:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:31.947 05:59:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:31.947 05:59:38 -- common/autotest_common.sh@10 -- # set +x 00:04:31.947 ************************************ 00:04:31.947 START TEST default_locks 00:04:31.947 ************************************ 00:04:31.947 05:59:38 -- common/autotest_common.sh@1104 -- # default_locks 00:04:31.947 05:59:38 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=999384 00:04:31.947 05:59:38 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:31.947 05:59:38 -- event/cpu_locks.sh@47 -- # waitforlisten 999384 00:04:31.947 05:59:38 -- common/autotest_common.sh@819 -- # '[' -z 999384 ']' 00:04:31.947 05:59:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.947 05:59:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:31.947 05:59:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.947 05:59:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:31.947 05:59:38 -- common/autotest_common.sh@10 -- # set +x 00:04:31.947 [2024-07-13 05:59:38.301230] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:04:31.947 [2024-07-13 05:59:38.301321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid999384 ] 00:04:31.947 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.947 [2024-07-13 05:59:38.357713] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.206 [2024-07-13 05:59:38.462712] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:32.206 [2024-07-13 05:59:38.462897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.772 05:59:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:32.772 05:59:39 -- common/autotest_common.sh@852 -- # return 0 00:04:32.772 05:59:39 -- event/cpu_locks.sh@49 -- # locks_exist 999384 00:04:32.772 05:59:39 -- event/cpu_locks.sh@22 -- # lslocks -p 999384 00:04:32.772 05:59:39 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:33.337 lslocks: write error 00:04:33.337 05:59:39 -- event/cpu_locks.sh@50 -- # killprocess 999384 00:04:33.337 05:59:39 -- common/autotest_common.sh@926 -- # '[' -z 999384 ']' 00:04:33.337 05:59:39 -- common/autotest_common.sh@930 -- # kill -0 999384 00:04:33.337 05:59:39 -- common/autotest_common.sh@931 -- # uname 00:04:33.337 05:59:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:33.337 05:59:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 999384 00:04:33.337 05:59:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:33.337 05:59:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:33.337 05:59:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 999384' 00:04:33.337 killing process with pid 999384 00:04:33.337 05:59:39 -- common/autotest_common.sh@945 -- # kill 999384 00:04:33.337 05:59:39 -- common/autotest_common.sh@950 -- # wait 999384 00:04:33.903 05:59:40 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 999384 00:04:33.903 05:59:40 -- common/autotest_common.sh@640 -- # local es=0 00:04:33.903 05:59:40 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 999384 00:04:33.903 05:59:40 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:04:33.903 05:59:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:33.903 05:59:40 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:04:33.903 05:59:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:33.903 05:59:40 -- common/autotest_common.sh@643 -- # waitforlisten 999384 00:04:33.903 05:59:40 -- common/autotest_common.sh@819 -- # '[' -z 999384 ']' 00:04:33.903 05:59:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.903 05:59:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:33.903 05:59:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:33.903 05:59:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:33.903 05:59:40 -- common/autotest_common.sh@10 -- # set +x 00:04:33.903 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (999384) - No such process 00:04:33.903 ERROR: process (pid: 999384) is no longer running 00:04:33.903 05:59:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:33.903 05:59:40 -- common/autotest_common.sh@852 -- # return 1 00:04:33.903 05:59:40 -- common/autotest_common.sh@643 -- # es=1 00:04:33.903 05:59:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:04:33.904 05:59:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:04:33.904 05:59:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:04:33.904 05:59:40 -- event/cpu_locks.sh@54 -- # no_locks 00:04:33.904 05:59:40 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:33.904 05:59:40 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:33.904 05:59:40 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:33.904 00:04:33.904 real 0m1.923s 00:04:33.904 user 0m2.043s 00:04:33.904 sys 0m0.590s 00:04:33.904 05:59:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.904 05:59:40 -- common/autotest_common.sh@10 -- # set +x 00:04:33.904 ************************************ 00:04:33.904 END TEST default_locks 00:04:33.904 ************************************ 00:04:33.904 05:59:40 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:33.904 05:59:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:33.904 05:59:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:33.904 05:59:40 -- common/autotest_common.sh@10 -- # set +x 00:04:33.904 ************************************ 00:04:33.904 START TEST default_locks_via_rpc 00:04:33.904 ************************************ 00:04:33.904 05:59:40 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:04:33.904 05:59:40 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=999560 00:04:33.904 05:59:40 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:33.904 05:59:40 -- event/cpu_locks.sh@63 -- # waitforlisten 999560 00:04:33.904 05:59:40 -- common/autotest_common.sh@819 -- # '[' -z 999560 ']' 00:04:33.904 05:59:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.904 05:59:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:33.904 05:59:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.904 05:59:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:33.904 05:59:40 -- common/autotest_common.sh@10 -- # set +x 00:04:33.904 [2024-07-13 05:59:40.256063] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
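The locks_exist check that default_locks exercised above reduces to asking lslocks whether the target still holds its per-core lock files (the /var/tmp/spdk_cpu_lock_* names show up later in this log in the check_remaining_locks output). A rough equivalent, not the helper verbatim:
```bash
# Assumed equivalent of the locks_exist check traced above
locks_exist() {
    local pid=$1
    # a target started with cpumask locks enabled holds locks on
    # /var/tmp/spdk_cpu_lock_### files, one per claimed core
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}
```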
00:04:33.904 [2024-07-13 05:59:40.256146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid999560 ] 00:04:33.904 EAL: No free 2048 kB hugepages reported on node 1 00:04:33.904 [2024-07-13 05:59:40.319689] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.162 [2024-07-13 05:59:40.438676] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:34.162 [2024-07-13 05:59:40.438841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.728 05:59:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:34.728 05:59:41 -- common/autotest_common.sh@852 -- # return 0 00:04:34.728 05:59:41 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:34.728 05:59:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.728 05:59:41 -- common/autotest_common.sh@10 -- # set +x 00:04:34.728 05:59:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.728 05:59:41 -- event/cpu_locks.sh@67 -- # no_locks 00:04:34.728 05:59:41 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:34.728 05:59:41 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:34.728 05:59:41 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:34.728 05:59:41 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:34.728 05:59:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:34.728 05:59:41 -- common/autotest_common.sh@10 -- # set +x 00:04:34.728 05:59:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:34.728 05:59:41 -- event/cpu_locks.sh@71 -- # locks_exist 999560 00:04:34.728 05:59:41 -- event/cpu_locks.sh@22 -- # lslocks -p 999560 00:04:34.728 05:59:41 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:34.986 05:59:41 -- event/cpu_locks.sh@73 -- # killprocess 999560 00:04:34.986 05:59:41 -- common/autotest_common.sh@926 -- # '[' -z 999560 ']' 00:04:34.986 05:59:41 -- common/autotest_common.sh@930 -- # kill -0 999560 00:04:34.986 05:59:41 -- common/autotest_common.sh@931 -- # uname 00:04:34.986 05:59:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:34.986 05:59:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 999560 00:04:35.244 05:59:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:35.244 05:59:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:35.244 05:59:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 999560' 00:04:35.244 killing process with pid 999560 00:04:35.244 05:59:41 -- common/autotest_common.sh@945 -- # kill 999560 00:04:35.244 05:59:41 -- common/autotest_common.sh@950 -- # wait 999560 00:04:35.502 00:04:35.502 real 0m1.753s 00:04:35.502 user 0m1.861s 00:04:35.502 sys 0m0.566s 00:04:35.502 05:59:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.502 05:59:41 -- common/autotest_common.sh@10 -- # set +x 00:04:35.502 ************************************ 00:04:35.502 END TEST default_locks_via_rpc 00:04:35.502 ************************************ 00:04:35.502 05:59:41 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:35.502 05:59:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:35.502 05:59:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:35.502 05:59:41 -- common/autotest_common.sh@10 
-- # set +x 00:04:35.502 ************************************ 00:04:35.502 START TEST non_locking_app_on_locked_coremask 00:04:35.502 ************************************ 00:04:35.502 05:59:41 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:04:35.502 05:59:41 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=999851 00:04:35.502 05:59:41 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:35.502 05:59:41 -- event/cpu_locks.sh@81 -- # waitforlisten 999851 /var/tmp/spdk.sock 00:04:35.502 05:59:41 -- common/autotest_common.sh@819 -- # '[' -z 999851 ']' 00:04:35.502 05:59:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.502 05:59:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:35.502 05:59:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.502 05:59:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:35.502 05:59:41 -- common/autotest_common.sh@10 -- # set +x 00:04:35.760 [2024-07-13 05:59:42.034445] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:35.760 [2024-07-13 05:59:42.034539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid999851 ] 00:04:35.760 EAL: No free 2048 kB hugepages reported on node 1 00:04:35.760 [2024-07-13 05:59:42.096036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.760 [2024-07-13 05:59:42.213612] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:35.760 [2024-07-13 05:59:42.213795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.726 05:59:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:36.726 05:59:42 -- common/autotest_common.sh@852 -- # return 0 00:04:36.726 05:59:42 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=999992 00:04:36.726 05:59:42 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:36.726 05:59:42 -- event/cpu_locks.sh@85 -- # waitforlisten 999992 /var/tmp/spdk2.sock 00:04:36.726 05:59:42 -- common/autotest_common.sh@819 -- # '[' -z 999992 ']' 00:04:36.726 05:59:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:36.726 05:59:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:36.726 05:59:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:36.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:36.726 05:59:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:36.726 05:59:42 -- common/autotest_common.sh@10 -- # set +x 00:04:36.726 [2024-07-13 05:59:43.044010] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:04:36.726 [2024-07-13 05:59:43.044102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid999992 ] 00:04:36.726 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.726 [2024-07-13 05:59:43.135971] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:36.726 [2024-07-13 05:59:43.136003] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.984 [2024-07-13 05:59:43.368166] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:36.984 [2024-07-13 05:59:43.368351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.550 05:59:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:37.550 05:59:43 -- common/autotest_common.sh@852 -- # return 0 00:04:37.550 05:59:43 -- event/cpu_locks.sh@87 -- # locks_exist 999851 00:04:37.550 05:59:43 -- event/cpu_locks.sh@22 -- # lslocks -p 999851 00:04:37.550 05:59:43 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:38.115 lslocks: write error 00:04:38.115 05:59:44 -- event/cpu_locks.sh@89 -- # killprocess 999851 00:04:38.115 05:59:44 -- common/autotest_common.sh@926 -- # '[' -z 999851 ']' 00:04:38.115 05:59:44 -- common/autotest_common.sh@930 -- # kill -0 999851 00:04:38.115 05:59:44 -- common/autotest_common.sh@931 -- # uname 00:04:38.115 05:59:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:38.115 05:59:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 999851 00:04:38.115 05:59:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:38.115 05:59:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:38.115 05:59:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 999851' 00:04:38.115 killing process with pid 999851 00:04:38.115 05:59:44 -- common/autotest_common.sh@945 -- # kill 999851 00:04:38.115 05:59:44 -- common/autotest_common.sh@950 -- # wait 999851 00:04:39.047 05:59:45 -- event/cpu_locks.sh@90 -- # killprocess 999992 00:04:39.047 05:59:45 -- common/autotest_common.sh@926 -- # '[' -z 999992 ']' 00:04:39.047 05:59:45 -- common/autotest_common.sh@930 -- # kill -0 999992 00:04:39.047 05:59:45 -- common/autotest_common.sh@931 -- # uname 00:04:39.047 05:59:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:39.047 05:59:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 999992 00:04:39.047 05:59:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:39.047 05:59:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:39.047 05:59:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 999992' 00:04:39.047 killing process with pid 999992 00:04:39.047 05:59:45 -- common/autotest_common.sh@945 -- # kill 999992 00:04:39.047 05:59:45 -- common/autotest_common.sh@950 -- # wait 999992 00:04:39.613 00:04:39.613 real 0m3.829s 00:04:39.613 user 0m4.147s 00:04:39.613 sys 0m1.088s 00:04:39.613 05:59:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.613 05:59:45 -- common/autotest_common.sh@10 -- # set +x 00:04:39.613 ************************************ 00:04:39.613 END TEST non_locking_app_on_locked_coremask 00:04:39.613 ************************************ 00:04:39.613 05:59:45 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 
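non_locking_app_on_locked_coremask, which just finished above, is the key pairing for this suite: the first target claims core 0's lock, and a second target is started on the same mask but with --disable-cpumask-locks and its own RPC socket, so both can run side by side. Condensed from the command lines in the trace (backgrounding and waitforlisten omitted):
```bash
# Both command lines appear verbatim in the trace above
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 &
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 \
    --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # skips the core lock, second RPC socket
```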
00:04:39.613 05:59:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:39.613 05:59:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:39.613 05:59:45 -- common/autotest_common.sh@10 -- # set +x 00:04:39.613 ************************************ 00:04:39.613 START TEST locking_app_on_unlocked_coremask 00:04:39.613 ************************************ 00:04:39.613 05:59:45 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:04:39.613 05:59:45 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1000308 00:04:39.613 05:59:45 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:39.613 05:59:45 -- event/cpu_locks.sh@99 -- # waitforlisten 1000308 /var/tmp/spdk.sock 00:04:39.613 05:59:45 -- common/autotest_common.sh@819 -- # '[' -z 1000308 ']' 00:04:39.613 05:59:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.613 05:59:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:39.613 05:59:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.613 05:59:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:39.613 05:59:45 -- common/autotest_common.sh@10 -- # set +x 00:04:39.613 [2024-07-13 05:59:45.893056] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:39.613 [2024-07-13 05:59:45.893141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1000308 ] 00:04:39.613 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.613 [2024-07-13 05:59:45.956017] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:39.613 [2024-07-13 05:59:45.956055] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.613 [2024-07-13 05:59:46.077725] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:39.613 [2024-07-13 05:59:46.077918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.546 05:59:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:40.546 05:59:46 -- common/autotest_common.sh@852 -- # return 0 00:04:40.546 05:59:46 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1000445 00:04:40.546 05:59:46 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:40.546 05:59:46 -- event/cpu_locks.sh@103 -- # waitforlisten 1000445 /var/tmp/spdk2.sock 00:04:40.546 05:59:46 -- common/autotest_common.sh@819 -- # '[' -z 1000445 ']' 00:04:40.546 05:59:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:40.546 05:59:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:40.546 05:59:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:40.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:40.546 05:59:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:40.546 05:59:46 -- common/autotest_common.sh@10 -- # set +x 00:04:40.546 [2024-07-13 05:59:46.875783] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:40.546 [2024-07-13 05:59:46.875892] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1000445 ] 00:04:40.546 EAL: No free 2048 kB hugepages reported on node 1 00:04:40.546 [2024-07-13 05:59:46.972457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.804 [2024-07-13 05:59:47.205837] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:40.804 [2024-07-13 05:59:47.206020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.370 05:59:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:41.370 05:59:47 -- common/autotest_common.sh@852 -- # return 0 00:04:41.370 05:59:47 -- event/cpu_locks.sh@105 -- # locks_exist 1000445 00:04:41.370 05:59:47 -- event/cpu_locks.sh@22 -- # lslocks -p 1000445 00:04:41.370 05:59:47 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:41.935 lslocks: write error 00:04:41.935 05:59:48 -- event/cpu_locks.sh@107 -- # killprocess 1000308 00:04:41.935 05:59:48 -- common/autotest_common.sh@926 -- # '[' -z 1000308 ']' 00:04:41.935 05:59:48 -- common/autotest_common.sh@930 -- # kill -0 1000308 00:04:41.935 05:59:48 -- common/autotest_common.sh@931 -- # uname 00:04:41.935 05:59:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:41.935 05:59:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1000308 00:04:41.935 05:59:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:41.935 05:59:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:41.935 05:59:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1000308' 00:04:41.935 killing process with pid 1000308 00:04:41.935 05:59:48 -- common/autotest_common.sh@945 -- # kill 1000308 00:04:41.935 05:59:48 -- common/autotest_common.sh@950 -- # wait 1000308 00:04:42.868 05:59:49 -- event/cpu_locks.sh@108 -- # killprocess 1000445 00:04:42.868 05:59:49 -- common/autotest_common.sh@926 -- # '[' -z 1000445 ']' 00:04:42.868 05:59:49 -- common/autotest_common.sh@930 -- # kill -0 1000445 00:04:42.868 05:59:49 -- common/autotest_common.sh@931 -- # uname 00:04:42.868 05:59:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:42.868 05:59:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1000445 00:04:42.868 05:59:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:42.868 05:59:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:42.868 05:59:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1000445' 00:04:42.868 killing process with pid 1000445 00:04:42.868 05:59:49 -- common/autotest_common.sh@945 -- # kill 1000445 00:04:42.868 05:59:49 -- common/autotest_common.sh@950 -- # wait 1000445 00:04:43.433 00:04:43.433 real 0m3.887s 00:04:43.434 user 0m4.206s 00:04:43.434 sys 0m1.089s 00:04:43.434 05:59:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.434 05:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:43.434 ************************************ 00:04:43.434 END TEST locking_app_on_unlocked_coremask 
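killprocess, traced twice in the test above, is defensive: it confirms the pid is still alive, checks that the command name is an SPDK reactor rather than sudo, then kills and reaps it. A condensed sketch; the real helper in autotest_common.sh has additional branches (for example, the sudo case):
```bash
# Condensed sketch of the killprocess pattern, not the full helper
killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                  # still running?
    local name
    name=$(ps --no-headers -o comm= "$pid")     # reactor_0 for an SPDK target
    [ "$name" = sudo ] && return 1              # real helper handles sudo differently
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
}
```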
00:04:43.434 ************************************ 00:04:43.434 05:59:49 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:43.434 05:59:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:43.434 05:59:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:43.434 05:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:43.434 ************************************ 00:04:43.434 START TEST locking_app_on_locked_coremask 00:04:43.434 ************************************ 00:04:43.434 05:59:49 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:04:43.434 05:59:49 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1000884 00:04:43.434 05:59:49 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:43.434 05:59:49 -- event/cpu_locks.sh@116 -- # waitforlisten 1000884 /var/tmp/spdk.sock 00:04:43.434 05:59:49 -- common/autotest_common.sh@819 -- # '[' -z 1000884 ']' 00:04:43.434 05:59:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.434 05:59:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:43.434 05:59:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.434 05:59:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:43.434 05:59:49 -- common/autotest_common.sh@10 -- # set +x 00:04:43.434 [2024-07-13 05:59:49.808257] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:43.434 [2024-07-13 05:59:49.808352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1000884 ] 00:04:43.434 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.434 [2024-07-13 05:59:49.864688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.692 [2024-07-13 05:59:49.974380] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:43.692 [2024-07-13 05:59:49.974553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.257 05:59:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:44.257 05:59:50 -- common/autotest_common.sh@852 -- # return 0 00:04:44.257 05:59:50 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1001026 00:04:44.257 05:59:50 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:44.257 05:59:50 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1001026 /var/tmp/spdk2.sock 00:04:44.257 05:59:50 -- common/autotest_common.sh@640 -- # local es=0 00:04:44.257 05:59:50 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 1001026 /var/tmp/spdk2.sock 00:04:44.257 05:59:50 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:04:44.257 05:59:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:44.257 05:59:50 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:04:44.257 05:59:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:44.257 05:59:50 -- common/autotest_common.sh@643 -- # waitforlisten 1001026 /var/tmp/spdk2.sock 00:04:44.257 05:59:50 -- common/autotest_common.sh@819 -- 
# '[' -z 1001026 ']' 00:04:44.257 05:59:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:44.257 05:59:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:44.257 05:59:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:44.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:44.257 05:59:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:44.257 05:59:50 -- common/autotest_common.sh@10 -- # set +x 00:04:44.515 [2024-07-13 05:59:50.780874] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:44.515 [2024-07-13 05:59:50.780949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1001026 ] 00:04:44.515 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.515 [2024-07-13 05:59:50.877368] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1000884 has claimed it. 00:04:44.515 [2024-07-13 05:59:50.877438] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:45.080 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (1001026) - No such process 00:04:45.080 ERROR: process (pid: 1001026) is no longer running 00:04:45.080 05:59:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:45.080 05:59:51 -- common/autotest_common.sh@852 -- # return 1 00:04:45.080 05:59:51 -- common/autotest_common.sh@643 -- # es=1 00:04:45.080 05:59:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:04:45.080 05:59:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:04:45.080 05:59:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:04:45.080 05:59:51 -- event/cpu_locks.sh@122 -- # locks_exist 1000884 00:04:45.080 05:59:51 -- event/cpu_locks.sh@22 -- # lslocks -p 1000884 00:04:45.080 05:59:51 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:45.338 lslocks: write error 00:04:45.338 05:59:51 -- event/cpu_locks.sh@124 -- # killprocess 1000884 00:04:45.338 05:59:51 -- common/autotest_common.sh@926 -- # '[' -z 1000884 ']' 00:04:45.338 05:59:51 -- common/autotest_common.sh@930 -- # kill -0 1000884 00:04:45.338 05:59:51 -- common/autotest_common.sh@931 -- # uname 00:04:45.338 05:59:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:45.338 05:59:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1000884 00:04:45.338 05:59:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:45.338 05:59:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:45.338 05:59:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1000884' 00:04:45.338 killing process with pid 1000884 00:04:45.338 05:59:51 -- common/autotest_common.sh@945 -- # kill 1000884 00:04:45.338 05:59:51 -- common/autotest_common.sh@950 -- # wait 1000884 00:04:45.903 00:04:45.903 real 0m2.536s 00:04:45.904 user 0m2.910s 00:04:45.904 sys 0m0.644s 00:04:45.904 05:59:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.904 05:59:52 -- common/autotest_common.sh@10 -- # set +x 00:04:45.904 ************************************ 00:04:45.904 END TEST locking_app_on_locked_coremask 00:04:45.904 ************************************ 00:04:45.904 
05:59:52 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:45.904 05:59:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:45.904 05:59:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:45.904 05:59:52 -- common/autotest_common.sh@10 -- # set +x 00:04:45.904 ************************************ 00:04:45.904 START TEST locking_overlapped_coremask 00:04:45.904 ************************************ 00:04:45.904 05:59:52 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:04:45.904 05:59:52 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1001196 00:04:45.904 05:59:52 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:45.904 05:59:52 -- event/cpu_locks.sh@133 -- # waitforlisten 1001196 /var/tmp/spdk.sock 00:04:45.904 05:59:52 -- common/autotest_common.sh@819 -- # '[' -z 1001196 ']' 00:04:45.904 05:59:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.904 05:59:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:45.904 05:59:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.904 05:59:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:45.904 05:59:52 -- common/autotest_common.sh@10 -- # set +x 00:04:45.904 [2024-07-13 05:59:52.367815] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:45.904 [2024-07-13 05:59:52.367918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1001196 ] 00:04:45.904 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.162 [2024-07-13 05:59:52.426054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:46.162 [2024-07-13 05:59:52.534959] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:46.162 [2024-07-13 05:59:52.535163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.162 [2024-07-13 05:59:52.535224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:46.162 [2024-07-13 05:59:52.535227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.094 05:59:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:47.094 05:59:53 -- common/autotest_common.sh@852 -- # return 0 00:04:47.094 05:59:53 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1001340 00:04:47.094 05:59:53 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1001340 /var/tmp/spdk2.sock 00:04:47.094 05:59:53 -- common/autotest_common.sh@640 -- # local es=0 00:04:47.094 05:59:53 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 1001340 /var/tmp/spdk2.sock 00:04:47.094 05:59:53 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:04:47.094 05:59:53 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:47.094 05:59:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:47.094 05:59:53 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:04:47.094 05:59:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:47.094 05:59:53 
-- common/autotest_common.sh@643 -- # waitforlisten 1001340 /var/tmp/spdk2.sock 00:04:47.094 05:59:53 -- common/autotest_common.sh@819 -- # '[' -z 1001340 ']' 00:04:47.094 05:59:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:47.094 05:59:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:47.094 05:59:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:47.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:47.094 05:59:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:47.094 05:59:53 -- common/autotest_common.sh@10 -- # set +x 00:04:47.094 [2024-07-13 05:59:53.345274] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:47.094 [2024-07-13 05:59:53.345359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1001340 ] 00:04:47.094 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.094 [2024-07-13 05:59:53.433342] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1001196 has claimed it. 00:04:47.094 [2024-07-13 05:59:53.433402] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:47.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (1001340) - No such process 00:04:47.659 ERROR: process (pid: 1001340) is no longer running 00:04:47.659 05:59:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:47.659 05:59:54 -- common/autotest_common.sh@852 -- # return 1 00:04:47.659 05:59:54 -- common/autotest_common.sh@643 -- # es=1 00:04:47.659 05:59:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:04:47.659 05:59:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:04:47.659 05:59:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:04:47.659 05:59:54 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:47.659 05:59:54 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:47.659 05:59:54 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:47.659 05:59:54 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:47.659 05:59:54 -- event/cpu_locks.sh@141 -- # killprocess 1001196 00:04:47.659 05:59:54 -- common/autotest_common.sh@926 -- # '[' -z 1001196 ']' 00:04:47.659 05:59:54 -- common/autotest_common.sh@930 -- # kill -0 1001196 00:04:47.659 05:59:54 -- common/autotest_common.sh@931 -- # uname 00:04:47.659 05:59:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:47.659 05:59:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1001196 00:04:47.659 05:59:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:47.659 05:59:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:47.659 05:59:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1001196' 00:04:47.659 killing process with pid 1001196 00:04:47.659 05:59:54 -- common/autotest_common.sh@945 -- # kill 1001196 00:04:47.659 05:59:54 
-- common/autotest_common.sh@950 -- # wait 1001196 00:04:48.226 00:04:48.226 real 0m2.194s 00:04:48.226 user 0m6.162s 00:04:48.226 sys 0m0.461s 00:04:48.227 05:59:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.227 05:59:54 -- common/autotest_common.sh@10 -- # set +x 00:04:48.227 ************************************ 00:04:48.227 END TEST locking_overlapped_coremask 00:04:48.227 ************************************ 00:04:48.227 05:59:54 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:48.227 05:59:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:48.227 05:59:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:48.227 05:59:54 -- common/autotest_common.sh@10 -- # set +x 00:04:48.227 ************************************ 00:04:48.227 START TEST locking_overlapped_coremask_via_rpc 00:04:48.227 ************************************ 00:04:48.227 05:59:54 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:04:48.227 05:59:54 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1001504 00:04:48.227 05:59:54 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:48.227 05:59:54 -- event/cpu_locks.sh@149 -- # waitforlisten 1001504 /var/tmp/spdk.sock 00:04:48.227 05:59:54 -- common/autotest_common.sh@819 -- # '[' -z 1001504 ']' 00:04:48.227 05:59:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.227 05:59:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:48.227 05:59:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.227 05:59:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:48.227 05:59:54 -- common/autotest_common.sh@10 -- # set +x 00:04:48.227 [2024-07-13 05:59:54.589123] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:48.227 [2024-07-13 05:59:54.589213] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1001504 ] 00:04:48.227 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.227 [2024-07-13 05:59:54.650936] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:48.227 [2024-07-13 05:59:54.650982] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:48.485 [2024-07-13 05:59:54.763844] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:48.485 [2024-07-13 05:59:54.764059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.485 [2024-07-13 05:59:54.764116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:48.485 [2024-07-13 05:59:54.764119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.055 05:59:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:49.055 05:59:55 -- common/autotest_common.sh@852 -- # return 0 00:04:49.055 05:59:55 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1001646 00:04:49.055 05:59:55 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:49.055 05:59:55 -- event/cpu_locks.sh@153 -- # waitforlisten 1001646 /var/tmp/spdk2.sock 00:04:49.055 05:59:55 -- common/autotest_common.sh@819 -- # '[' -z 1001646 ']' 00:04:49.055 05:59:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:49.055 05:59:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:49.055 05:59:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:49.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:49.055 05:59:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:49.055 05:59:55 -- common/autotest_common.sh@10 -- # set +x 00:04:49.055 [2024-07-13 05:59:55.540050] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:49.056 [2024-07-13 05:59:55.540126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1001646 ] 00:04:49.330 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.330 [2024-07-13 05:59:55.628544] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
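The two targets in this via-RPC variant are started with -m 0x7 and -m 0x1c. Those masks intersect only on core 2, which is why the second target's attempt to enable cpumask locks fails on exactly that core a few lines below.
```bash
# 0x07 = cores 0-2, 0x1c = cores 2-4; the intersection is bit 2 only
printf 'shared mask: 0x%x\n' $(( 0x07 & 0x1c ))   # -> 0x4, i.e. core 2
```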
00:04:49.330 [2024-07-13 05:59:55.628582] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:49.588 [2024-07-13 05:59:55.851017] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:49.588 [2024-07-13 05:59:55.851238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:49.588 [2024-07-13 05:59:55.851290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:04:49.588 [2024-07-13 05:59:55.851291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:50.154 05:59:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:50.154 05:59:56 -- common/autotest_common.sh@852 -- # return 0 00:04:50.154 05:59:56 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:50.154 05:59:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:50.154 05:59:56 -- common/autotest_common.sh@10 -- # set +x 00:04:50.154 05:59:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:50.154 05:59:56 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:50.154 05:59:56 -- common/autotest_common.sh@640 -- # local es=0 00:04:50.154 05:59:56 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:50.154 05:59:56 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:04:50.154 05:59:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:50.154 05:59:56 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:04:50.154 05:59:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:50.154 05:59:56 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:50.154 05:59:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:50.154 05:59:56 -- common/autotest_common.sh@10 -- # set +x 00:04:50.154 [2024-07-13 05:59:56.447973] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1001504 has claimed it. 00:04:50.154 request: 00:04:50.154 { 00:04:50.154 "method": "framework_enable_cpumask_locks", 00:04:50.154 "req_id": 1 00:04:50.154 } 00:04:50.154 Got JSON-RPC error response 00:04:50.154 response: 00:04:50.154 { 00:04:50.154 "code": -32603, 00:04:50.154 "message": "Failed to claim CPU core: 2" 00:04:50.154 } 00:04:50.154 05:59:56 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:04:50.154 05:59:56 -- common/autotest_common.sh@643 -- # es=1 00:04:50.154 05:59:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:04:50.154 05:59:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:04:50.154 05:59:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:04:50.154 05:59:56 -- event/cpu_locks.sh@158 -- # waitforlisten 1001504 /var/tmp/spdk.sock 00:04:50.154 05:59:56 -- common/autotest_common.sh@819 -- # '[' -z 1001504 ']' 00:04:50.154 05:59:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.154 05:59:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:50.154 05:59:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
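rpc_cmd in the trace above is the suite's wrapper around SPDK's JSON-RPC client. Assuming the standard scripts/rpc.py entry point, the two calls correspond to roughly the following; the second one returns the -32603 "Failed to claim CPU core: 2" error shown above because core 2 is already locked by pid 1001504.
```bash
# First target, default /var/tmp/spdk.sock: it was started with
# --disable-cpumask-locks, so enabling the locks via RPC succeeds.
scripts/rpc.py framework_enable_cpumask_locks

# Second target on its own socket with an overlapping mask: fails with -32603.
scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
```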
00:04:50.154 05:59:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:50.154 05:59:56 -- common/autotest_common.sh@10 -- # set +x 00:04:50.412 05:59:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:50.412 05:59:56 -- common/autotest_common.sh@852 -- # return 0 00:04:50.412 05:59:56 -- event/cpu_locks.sh@159 -- # waitforlisten 1001646 /var/tmp/spdk2.sock 00:04:50.412 05:59:56 -- common/autotest_common.sh@819 -- # '[' -z 1001646 ']' 00:04:50.412 05:59:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:50.412 05:59:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:50.412 05:59:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:50.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:50.412 05:59:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:50.412 05:59:56 -- common/autotest_common.sh@10 -- # set +x 00:04:50.670 05:59:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:50.670 05:59:56 -- common/autotest_common.sh@852 -- # return 0 00:04:50.670 05:59:56 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:50.670 05:59:56 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:50.670 05:59:56 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:50.670 05:59:56 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:50.670 00:04:50.670 real 0m2.383s 00:04:50.670 user 0m1.135s 00:04:50.670 sys 0m0.184s 00:04:50.670 05:59:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.670 05:59:56 -- common/autotest_common.sh@10 -- # set +x 00:04:50.670 ************************************ 00:04:50.670 END TEST locking_overlapped_coremask_via_rpc 00:04:50.670 ************************************ 00:04:50.670 05:59:56 -- event/cpu_locks.sh@174 -- # cleanup 00:04:50.670 05:59:56 -- event/cpu_locks.sh@15 -- # [[ -z 1001504 ]] 00:04:50.670 05:59:56 -- event/cpu_locks.sh@15 -- # killprocess 1001504 00:04:50.670 05:59:56 -- common/autotest_common.sh@926 -- # '[' -z 1001504 ']' 00:04:50.670 05:59:56 -- common/autotest_common.sh@930 -- # kill -0 1001504 00:04:50.670 05:59:56 -- common/autotest_common.sh@931 -- # uname 00:04:50.670 05:59:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:50.670 05:59:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1001504 00:04:50.670 05:59:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:50.670 05:59:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:50.670 05:59:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1001504' 00:04:50.670 killing process with pid 1001504 00:04:50.670 05:59:56 -- common/autotest_common.sh@945 -- # kill 1001504 00:04:50.670 05:59:56 -- common/autotest_common.sh@950 -- # wait 1001504 00:04:50.928 05:59:57 -- event/cpu_locks.sh@16 -- # [[ -z 1001646 ]] 00:04:50.928 05:59:57 -- event/cpu_locks.sh@16 -- # killprocess 1001646 00:04:50.928 05:59:57 -- common/autotest_common.sh@926 -- # '[' -z 1001646 ']' 00:04:50.928 05:59:57 -- common/autotest_common.sh@930 -- # kill -0 1001646 00:04:50.928 05:59:57 -- common/autotest_common.sh@931 -- # uname 
00:04:50.928 05:59:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:50.928 05:59:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1001646 00:04:51.186 05:59:57 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:04:51.186 05:59:57 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:04:51.186 05:59:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1001646' 00:04:51.186 killing process with pid 1001646 00:04:51.186 05:59:57 -- common/autotest_common.sh@945 -- # kill 1001646 00:04:51.186 05:59:57 -- common/autotest_common.sh@950 -- # wait 1001646 00:04:51.444 05:59:57 -- event/cpu_locks.sh@18 -- # rm -f 00:04:51.444 05:59:57 -- event/cpu_locks.sh@1 -- # cleanup 00:04:51.444 05:59:57 -- event/cpu_locks.sh@15 -- # [[ -z 1001504 ]] 00:04:51.444 05:59:57 -- event/cpu_locks.sh@15 -- # killprocess 1001504 00:04:51.444 05:59:57 -- common/autotest_common.sh@926 -- # '[' -z 1001504 ']' 00:04:51.444 05:59:57 -- common/autotest_common.sh@930 -- # kill -0 1001504 00:04:51.444 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1001504) - No such process 00:04:51.444 05:59:57 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1001504 is not found' 00:04:51.444 Process with pid 1001504 is not found 00:04:51.444 05:59:57 -- event/cpu_locks.sh@16 -- # [[ -z 1001646 ]] 00:04:51.444 05:59:57 -- event/cpu_locks.sh@16 -- # killprocess 1001646 00:04:51.444 05:59:57 -- common/autotest_common.sh@926 -- # '[' -z 1001646 ']' 00:04:51.444 05:59:57 -- common/autotest_common.sh@930 -- # kill -0 1001646 00:04:51.444 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1001646) - No such process 00:04:51.444 05:59:57 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1001646 is not found' 00:04:51.444 Process with pid 1001646 is not found 00:04:51.444 05:59:57 -- event/cpu_locks.sh@18 -- # rm -f 00:04:51.444 00:04:51.444 real 0m19.698s 00:04:51.444 user 0m34.393s 00:04:51.444 sys 0m5.428s 00:04:51.444 05:59:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.444 05:59:57 -- common/autotest_common.sh@10 -- # set +x 00:04:51.444 ************************************ 00:04:51.444 END TEST cpu_locks 00:04:51.445 ************************************ 00:04:51.445 00:04:51.445 real 0m46.086s 00:04:51.445 user 1m26.528s 00:04:51.445 sys 0m9.344s 00:04:51.445 05:59:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.445 05:59:57 -- common/autotest_common.sh@10 -- # set +x 00:04:51.445 ************************************ 00:04:51.445 END TEST event 00:04:51.445 ************************************ 00:04:51.445 05:59:57 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:51.445 05:59:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:51.445 05:59:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:51.445 05:59:57 -- common/autotest_common.sh@10 -- # set +x 00:04:51.445 ************************************ 00:04:51.445 START TEST thread 00:04:51.445 ************************************ 00:04:51.445 05:59:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:51.703 * Looking for test storage... 
00:04:51.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:51.703 05:59:57 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:51.703 05:59:58 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:04:51.703 05:59:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:51.703 05:59:58 -- common/autotest_common.sh@10 -- # set +x 00:04:51.703 ************************************ 00:04:51.703 START TEST thread_poller_perf 00:04:51.703 ************************************ 00:04:51.703 05:59:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:51.703 [2024-07-13 05:59:58.020220] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:51.703 [2024-07-13 05:59:58.020306] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1002013 ] 00:04:51.703 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.703 [2024-07-13 05:59:58.082418] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.703 [2024-07-13 05:59:58.190278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.703 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:53.075 ====================================== 00:04:53.075 busy:2709502631 (cyc) 00:04:53.075 total_run_count: 283000 00:04:53.075 tsc_hz: 2700000000 (cyc) 00:04:53.075 ====================================== 00:04:53.075 poller_cost: 9574 (cyc), 3545 (nsec) 00:04:53.075 00:04:53.075 real 0m1.317s 00:04:53.075 user 0m1.226s 00:04:53.075 sys 0m0.081s 00:04:53.075 05:59:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.075 05:59:59 -- common/autotest_common.sh@10 -- # set +x 00:04:53.075 ************************************ 00:04:53.075 END TEST thread_poller_perf 00:04:53.075 ************************************ 00:04:53.075 05:59:59 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:53.075 05:59:59 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:04:53.075 05:59:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:53.075 05:59:59 -- common/autotest_common.sh@10 -- # set +x 00:04:53.075 ************************************ 00:04:53.075 START TEST thread_poller_perf 00:04:53.075 ************************************ 00:04:53.075 05:59:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:53.075 [2024-07-13 05:59:59.363705] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
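The poller_perf summary above is plain division: poller_cost is busy cycles over total_run_count, and the nanosecond figure follows from the reported 2.7 GHz tsc_hz. Checking the -l 1 (1 microsecond period) run with its own numbers:
```bash
# cycles per poller invocation, then cycles -> ns at tsc_hz = 2.7 GHz
echo $(( 2709502631 / 283000 ))                  # 9574 cycles, matching poller_cost
awk 'BEGIN { printf "%.1f ns\n", 9574 / 2.7 }'   # ~3545.9 ns; the tool reports 3545 (truncated)
```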
00:04:53.075 [2024-07-13 05:59:59.363799] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1002175 ] 00:04:53.075 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.075 [2024-07-13 05:59:59.429538] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.075 [2024-07-13 05:59:59.547254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.075 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:04:54.449 ====================================== 00:04:54.449 busy:2703150517 (cyc) 00:04:54.449 total_run_count: 3825000 00:04:54.449 tsc_hz: 2700000000 (cyc) 00:04:54.449 ====================================== 00:04:54.449 poller_cost: 706 (cyc), 261 (nsec) 00:04:54.449 00:04:54.449 real 0m1.319s 00:04:54.449 user 0m1.222s 00:04:54.449 sys 0m0.089s 00:04:54.449 06:00:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.449 06:00:00 -- common/autotest_common.sh@10 -- # set +x 00:04:54.449 ************************************ 00:04:54.449 END TEST thread_poller_perf 00:04:54.449 ************************************ 00:04:54.449 06:00:00 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:54.449 00:04:54.449 real 0m2.739s 00:04:54.449 user 0m2.489s 00:04:54.449 sys 0m0.246s 00:04:54.449 06:00:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.449 06:00:00 -- common/autotest_common.sh@10 -- # set +x 00:04:54.449 ************************************ 00:04:54.449 END TEST thread 00:04:54.449 ************************************ 00:04:54.449 06:00:00 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:04:54.449 06:00:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:54.449 06:00:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:54.449 06:00:00 -- common/autotest_common.sh@10 -- # set +x 00:04:54.449 ************************************ 00:04:54.449 START TEST accel 00:04:54.449 ************************************ 00:04:54.449 06:00:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:04:54.449 * Looking for test storage... 00:04:54.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:04:54.449 06:00:00 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:04:54.449 06:00:00 -- accel/accel.sh@74 -- # get_expected_opcs 00:04:54.449 06:00:00 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:54.449 06:00:00 -- accel/accel.sh@59 -- # spdk_tgt_pid=1002544 00:04:54.449 06:00:00 -- accel/accel.sh@60 -- # waitforlisten 1002544 00:04:54.449 06:00:00 -- common/autotest_common.sh@819 -- # '[' -z 1002544 ']' 00:04:54.449 06:00:00 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:04:54.449 06:00:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.449 06:00:00 -- accel/accel.sh@58 -- # build_accel_config 00:04:54.449 06:00:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:54.449 06:00:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:54.449 06:00:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
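Same arithmetic for the -l 0 run above: with a zero-microsecond period the poller is invoked back to back, so the measured per-call overhead drops from roughly 9.6k cycles to about 700.
```bash
echo $(( 2703150517 / 3825000 ))                 # 706 cycles per run
awk 'BEGIN { printf "%.1f ns\n", 706 / 2.7 }'    # ~261.5 ns; reported as 261
```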
00:04:54.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.449 06:00:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:54.449 06:00:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:54.449 06:00:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:54.449 06:00:00 -- common/autotest_common.sh@10 -- # set +x 00:04:54.449 06:00:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:54.449 06:00:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:54.449 06:00:00 -- accel/accel.sh@41 -- # local IFS=, 00:04:54.449 06:00:00 -- accel/accel.sh@42 -- # jq -r . 00:04:54.449 [2024-07-13 06:00:00.803923] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:54.449 [2024-07-13 06:00:00.804010] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1002544 ] 00:04:54.449 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.449 [2024-07-13 06:00:00.867375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.708 [2024-07-13 06:00:00.984442] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:54.708 [2024-07-13 06:00:00.984624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.272 06:00:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:55.272 06:00:01 -- common/autotest_common.sh@852 -- # return 0 00:04:55.272 06:00:01 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:04:55.272 06:00:01 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:04:55.272 06:00:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:55.272 06:00:01 -- common/autotest_common.sh@10 -- # set +x 00:04:55.272 06:00:01 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:04:55.272 06:00:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:55.272 06:00:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:55.272 06:00:01 -- accel/accel.sh@64 -- # IFS== 00:04:55.272 06:00:01 -- accel/accel.sh@64 -- # read -r opc module 00:04:55.272 06:00:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:55.272 06:00:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:55.272 06:00:01 -- accel/accel.sh@64 -- # IFS== 00:04:55.272 06:00:01 -- accel/accel.sh@64 -- # read -r opc module 00:04:55.272 06:00:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:55.272 06:00:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:55.272 06:00:01 -- accel/accel.sh@64 -- # IFS== 00:04:55.272 06:00:01 -- accel/accel.sh@64 -- # read -r opc module 00:04:55.272 06:00:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:55.272 06:00:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:55.272 06:00:01 -- accel/accel.sh@64 -- # IFS== 00:04:55.272 06:00:01 -- accel/accel.sh@64 -- # read -r opc module 00:04:55.272 06:00:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:55.272 06:00:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:55.272 06:00:01 -- accel/accel.sh@64 -- # IFS== 00:04:55.272 06:00:01 -- accel/accel.sh@64 -- # read -r opc module 00:04:55.272 06:00:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:55.272 06:00:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:55.272 06:00:01 -- accel/accel.sh@64 -- # IFS== 00:04:55.272 06:00:01 -- accel/accel.sh@64 -- # read -r opc module 00:04:55.272 06:00:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:55.272 06:00:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:55.272 06:00:01 -- accel/accel.sh@64 -- # IFS== 00:04:55.272 06:00:01 -- accel/accel.sh@64 -- # read -r opc module 00:04:55.272 06:00:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:55.272 06:00:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:55.272 06:00:01 -- accel/accel.sh@64 -- # IFS== 00:04:55.272 06:00:01 -- accel/accel.sh@64 -- # read -r opc module 00:04:55.272 06:00:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:55.272 06:00:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:55.272 06:00:01 -- accel/accel.sh@64 -- # IFS== 00:04:55.272 06:00:01 -- accel/accel.sh@64 -- # read -r opc module 00:04:55.272 06:00:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:55.272 06:00:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:55.272 06:00:01 -- accel/accel.sh@64 -- # IFS== 00:04:55.272 06:00:01 -- accel/accel.sh@64 -- # read -r opc module 00:04:55.272 06:00:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:55.272 06:00:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:55.272 06:00:01 -- accel/accel.sh@64 -- # IFS== 00:04:55.272 06:00:01 -- accel/accel.sh@64 -- # read -r opc module 00:04:55.272 06:00:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:55.272 06:00:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:55.272 06:00:01 -- accel/accel.sh@64 -- # IFS== 00:04:55.272 06:00:01 -- accel/accel.sh@64 -- # read -r opc module 00:04:55.272 06:00:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:55.272 06:00:01 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:04:55.272 06:00:01 -- accel/accel.sh@64 -- # IFS== 00:04:55.272 06:00:01 -- accel/accel.sh@64 -- # read -r opc module 00:04:55.272 06:00:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:55.272 06:00:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:04:55.272 06:00:01 -- accel/accel.sh@64 -- # IFS== 00:04:55.272 06:00:01 -- accel/accel.sh@64 -- # read -r opc module 00:04:55.272 06:00:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:04:55.272 06:00:01 -- accel/accel.sh@67 -- # killprocess 1002544 00:04:55.272 06:00:01 -- common/autotest_common.sh@926 -- # '[' -z 1002544 ']' 00:04:55.272 06:00:01 -- common/autotest_common.sh@930 -- # kill -0 1002544 00:04:55.272 06:00:01 -- common/autotest_common.sh@931 -- # uname 00:04:55.272 06:00:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:55.272 06:00:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1002544 00:04:55.529 06:00:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:55.529 06:00:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:55.529 06:00:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1002544' 00:04:55.529 killing process with pid 1002544 00:04:55.529 06:00:01 -- common/autotest_common.sh@945 -- # kill 1002544 00:04:55.529 06:00:01 -- common/autotest_common.sh@950 -- # wait 1002544 00:04:55.786 06:00:02 -- accel/accel.sh@68 -- # trap - ERR 00:04:55.786 06:00:02 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:04:55.786 06:00:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:04:55.786 06:00:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:55.786 06:00:02 -- common/autotest_common.sh@10 -- # set +x 00:04:55.786 06:00:02 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:04:55.786 06:00:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:04:55.786 06:00:02 -- accel/accel.sh@12 -- # build_accel_config 00:04:55.786 06:00:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:55.786 06:00:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:55.786 06:00:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:55.786 06:00:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:55.786 06:00:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:55.786 06:00:02 -- accel/accel.sh@41 -- # local IFS=, 00:04:55.786 06:00:02 -- accel/accel.sh@42 -- # jq -r . 
00:04:55.786 06:00:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.786 06:00:02 -- common/autotest_common.sh@10 -- # set +x 00:04:55.786 06:00:02 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:04:55.786 06:00:02 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:04:55.786 06:00:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:55.786 06:00:02 -- common/autotest_common.sh@10 -- # set +x 00:04:55.786 ************************************ 00:04:55.786 START TEST accel_missing_filename 00:04:55.786 ************************************ 00:04:55.786 06:00:02 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:04:55.786 06:00:02 -- common/autotest_common.sh@640 -- # local es=0 00:04:55.786 06:00:02 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:04:55.786 06:00:02 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:04:55.786 06:00:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:55.786 06:00:02 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:04:55.786 06:00:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:55.786 06:00:02 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:04:55.786 06:00:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:04:55.786 06:00:02 -- accel/accel.sh@12 -- # build_accel_config 00:04:55.786 06:00:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:55.786 06:00:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:55.786 06:00:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:55.786 06:00:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:55.786 06:00:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:55.786 06:00:02 -- accel/accel.sh@41 -- # local IFS=, 00:04:55.786 06:00:02 -- accel/accel.sh@42 -- # jq -r . 00:04:56.043 [2024-07-13 06:00:02.302519] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:56.043 [2024-07-13 06:00:02.302597] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1002787 ] 00:04:56.043 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.043 [2024-07-13 06:00:02.366419] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.043 [2024-07-13 06:00:02.483651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.043 [2024-07-13 06:00:02.545497] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:56.301 [2024-07-13 06:00:02.630804] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:04:56.301 A filename is required. 
00:04:56.301 06:00:02 -- common/autotest_common.sh@643 -- # es=234 00:04:56.301 06:00:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:04:56.301 06:00:02 -- common/autotest_common.sh@652 -- # es=106 00:04:56.301 06:00:02 -- common/autotest_common.sh@653 -- # case "$es" in 00:04:56.301 06:00:02 -- common/autotest_common.sh@660 -- # es=1 00:04:56.301 06:00:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:04:56.301 00:04:56.301 real 0m0.471s 00:04:56.301 user 0m0.353s 00:04:56.301 sys 0m0.151s 00:04:56.301 06:00:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.301 06:00:02 -- common/autotest_common.sh@10 -- # set +x 00:04:56.301 ************************************ 00:04:56.301 END TEST accel_missing_filename 00:04:56.301 ************************************ 00:04:56.301 06:00:02 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:56.301 06:00:02 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:04:56.301 06:00:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:56.301 06:00:02 -- common/autotest_common.sh@10 -- # set +x 00:04:56.301 ************************************ 00:04:56.301 START TEST accel_compress_verify 00:04:56.301 ************************************ 00:04:56.301 06:00:02 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:56.301 06:00:02 -- common/autotest_common.sh@640 -- # local es=0 00:04:56.301 06:00:02 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:56.301 06:00:02 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:04:56.301 06:00:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:56.301 06:00:02 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:04:56.301 06:00:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:56.301 06:00:02 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:56.301 06:00:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:56.301 06:00:02 -- accel/accel.sh@12 -- # build_accel_config 00:04:56.301 06:00:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:56.301 06:00:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:56.301 06:00:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:56.301 06:00:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:56.301 06:00:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:56.301 06:00:02 -- accel/accel.sh@41 -- # local IFS=, 00:04:56.301 06:00:02 -- accel/accel.sh@42 -- # jq -r . 00:04:56.301 [2024-07-13 06:00:02.803368] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:04:56.301 [2024-07-13 06:00:02.803459] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1002826 ] 00:04:56.558 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.558 [2024-07-13 06:00:02.868960] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.558 [2024-07-13 06:00:02.987929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.558 [2024-07-13 06:00:03.044881] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:56.815 [2024-07-13 06:00:03.121831] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:04:56.815 00:04:56.815 Compression does not support the verify option, aborting. 00:04:56.815 06:00:03 -- common/autotest_common.sh@643 -- # es=161 00:04:56.815 06:00:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:04:56.815 06:00:03 -- common/autotest_common.sh@652 -- # es=33 00:04:56.815 06:00:03 -- common/autotest_common.sh@653 -- # case "$es" in 00:04:56.815 06:00:03 -- common/autotest_common.sh@660 -- # es=1 00:04:56.815 06:00:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:04:56.815 00:04:56.815 real 0m0.464s 00:04:56.815 user 0m0.351s 00:04:56.815 sys 0m0.147s 00:04:56.815 06:00:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.815 06:00:03 -- common/autotest_common.sh@10 -- # set +x 00:04:56.815 ************************************ 00:04:56.815 END TEST accel_compress_verify 00:04:56.815 ************************************ 00:04:56.815 06:00:03 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:04:56.815 06:00:03 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:04:56.815 06:00:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:56.815 06:00:03 -- common/autotest_common.sh@10 -- # set +x 00:04:56.815 ************************************ 00:04:56.815 START TEST accel_wrong_workload 00:04:56.815 ************************************ 00:04:56.815 06:00:03 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:04:56.815 06:00:03 -- common/autotest_common.sh@640 -- # local es=0 00:04:56.815 06:00:03 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:04:56.815 06:00:03 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:04:56.815 06:00:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:56.815 06:00:03 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:04:56.815 06:00:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:56.815 06:00:03 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:04:56.815 06:00:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:04:56.815 06:00:03 -- accel/accel.sh@12 -- # build_accel_config 00:04:56.815 06:00:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:56.815 06:00:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:56.815 06:00:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:56.815 06:00:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:56.815 06:00:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:56.815 06:00:03 -- accel/accel.sh@41 -- # local IFS=, 00:04:56.815 06:00:03 -- accel/accel.sh@42 -- # jq -r . 
00:04:56.815 Unsupported workload type: foobar 00:04:56.815 [2024-07-13 06:00:03.289003] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:04:56.815 accel_perf options: 00:04:56.815 [-h help message] 00:04:56.815 [-q queue depth per core] 00:04:56.815 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:56.815 [-T number of threads per core 00:04:56.815 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:56.815 [-t time in seconds] 00:04:56.815 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:56.815 [ dif_verify, , dif_generate, dif_generate_copy 00:04:56.815 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:56.815 [-l for compress/decompress workloads, name of uncompressed input file 00:04:56.815 [-S for crc32c workload, use this seed value (default 0) 00:04:56.815 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:56.815 [-f for fill workload, use this BYTE value (default 255) 00:04:56.815 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:56.815 [-y verify result if this switch is on] 00:04:56.815 [-a tasks to allocate per core (default: same value as -q)] 00:04:56.815 Can be used to spread operations across a wider range of memory. 00:04:56.815 06:00:03 -- common/autotest_common.sh@643 -- # es=1 00:04:56.815 06:00:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:04:56.815 06:00:03 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:04:56.815 06:00:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:04:56.815 00:04:56.815 real 0m0.022s 00:04:56.815 user 0m0.011s 00:04:56.815 sys 0m0.010s 00:04:56.815 06:00:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.815 06:00:03 -- common/autotest_common.sh@10 -- # set +x 00:04:56.815 ************************************ 00:04:56.815 END TEST accel_wrong_workload 00:04:56.815 ************************************ 00:04:56.815 Error: writing output failed: Broken pipe 00:04:56.815 06:00:03 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:04:56.815 06:00:03 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:04:56.815 06:00:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:56.815 06:00:03 -- common/autotest_common.sh@10 -- # set +x 00:04:56.815 ************************************ 00:04:56.815 START TEST accel_negative_buffers 00:04:56.815 ************************************ 00:04:56.815 06:00:03 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:04:56.815 06:00:03 -- common/autotest_common.sh@640 -- # local es=0 00:04:56.815 06:00:03 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:04:56.815 06:00:03 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:04:56.815 06:00:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:56.815 06:00:03 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:04:56.816 06:00:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:56.816 06:00:03 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:04:56.816 06:00:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:04:56.816 06:00:03 -- accel/accel.sh@12 -- # build_accel_config 00:04:56.816 06:00:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:56.816 06:00:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:56.816 06:00:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:56.816 06:00:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:56.816 06:00:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:56.816 06:00:03 -- accel/accel.sh@41 -- # local IFS=, 00:04:56.816 06:00:03 -- accel/accel.sh@42 -- # jq -r . 00:04:57.085 -x option must be non-negative. 00:04:57.085 [2024-07-13 06:00:03.333987] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:04:57.085 accel_perf options: 00:04:57.085 [-h help message] 00:04:57.085 [-q queue depth per core] 00:04:57.085 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:57.085 [-T number of threads per core 00:04:57.085 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:57.085 [-t time in seconds] 00:04:57.085 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:57.085 [ dif_verify, , dif_generate, dif_generate_copy 00:04:57.085 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:57.085 [-l for compress/decompress workloads, name of uncompressed input file 00:04:57.085 [-S for crc32c workload, use this seed value (default 0) 00:04:57.085 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:57.085 [-f for fill workload, use this BYTE value (default 255) 00:04:57.085 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:57.085 [-y verify result if this switch is on] 00:04:57.085 [-a tasks to allocate per core (default: same value as -q)] 00:04:57.085 Can be used to spread operations across a wider range of memory. 
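Both failures above come from accel_perf argument validation, and each time the full option list is echoed back. For reference, a well-formed invocation assembled only from options that appear in that help text would look like the line below; the flag values mirror the crc32c run that follows, and the binary path is the one used throughout this job, so treat it as a sketch rather than the canonical command:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -t 1 -w crc32c -S 32 -o 4096 -q 32 -y   # 1 s of 4 KiB crc32c ops, seed 32, queue depth 32, verify on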
00:04:57.085 06:00:03 -- common/autotest_common.sh@643 -- # es=1 00:04:57.085 06:00:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:04:57.085 06:00:03 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:04:57.085 06:00:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:04:57.085 00:04:57.085 real 0m0.021s 00:04:57.085 user 0m0.009s 00:04:57.085 sys 0m0.012s 00:04:57.085 06:00:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.085 06:00:03 -- common/autotest_common.sh@10 -- # set +x 00:04:57.085 ************************************ 00:04:57.085 END TEST accel_negative_buffers 00:04:57.085 ************************************ 00:04:57.085 Error: writing output failed: Broken pipe 00:04:57.085 06:00:03 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:04:57.085 06:00:03 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:04:57.085 06:00:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:57.085 06:00:03 -- common/autotest_common.sh@10 -- # set +x 00:04:57.085 ************************************ 00:04:57.085 START TEST accel_crc32c 00:04:57.085 ************************************ 00:04:57.085 06:00:03 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:04:57.085 06:00:03 -- accel/accel.sh@16 -- # local accel_opc 00:04:57.085 06:00:03 -- accel/accel.sh@17 -- # local accel_module 00:04:57.085 06:00:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:04:57.085 06:00:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:04:57.085 06:00:03 -- accel/accel.sh@12 -- # build_accel_config 00:04:57.085 06:00:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:57.085 06:00:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:57.085 06:00:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:57.085 06:00:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:57.085 06:00:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:57.085 06:00:03 -- accel/accel.sh@41 -- # local IFS=, 00:04:57.085 06:00:03 -- accel/accel.sh@42 -- # jq -r . 00:04:57.085 [2024-07-13 06:00:03.383470] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:57.085 [2024-07-13 06:00:03.383552] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1002994 ] 00:04:57.085 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.085 [2024-07-13 06:00:03.448617] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.085 [2024-07-13 06:00:03.565668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.459 06:00:04 -- accel/accel.sh@18 -- # out=' 00:04:58.459 SPDK Configuration: 00:04:58.459 Core mask: 0x1 00:04:58.459 00:04:58.459 Accel Perf Configuration: 00:04:58.459 Workload Type: crc32c 00:04:58.459 CRC-32C seed: 32 00:04:58.459 Transfer size: 4096 bytes 00:04:58.459 Vector count 1 00:04:58.460 Module: software 00:04:58.460 Queue depth: 32 00:04:58.460 Allocate depth: 32 00:04:58.460 # threads/core: 1 00:04:58.460 Run time: 1 seconds 00:04:58.460 Verify: Yes 00:04:58.460 00:04:58.460 Running for 1 seconds... 
00:04:58.460 00:04:58.460 Core,Thread Transfers Bandwidth Failed Miscompares 00:04:58.460 ------------------------------------------------------------------------------------ 00:04:58.460 0,0 405472/s 1583 MiB/s 0 0 00:04:58.460 ==================================================================================== 00:04:58.460 Total 405472/s 1583 MiB/s 0 0' 00:04:58.460 06:00:04 -- accel/accel.sh@20 -- # IFS=: 00:04:58.460 06:00:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:04:58.460 06:00:04 -- accel/accel.sh@20 -- # read -r var val 00:04:58.460 06:00:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:04:58.460 06:00:04 -- accel/accel.sh@12 -- # build_accel_config 00:04:58.460 06:00:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:04:58.460 06:00:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:58.460 06:00:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:58.460 06:00:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:04:58.460 06:00:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:04:58.460 06:00:04 -- accel/accel.sh@41 -- # local IFS=, 00:04:58.460 06:00:04 -- accel/accel.sh@42 -- # jq -r . 00:04:58.460 [2024-07-13 06:00:04.862340] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:58.460 [2024-07-13 06:00:04.862421] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1003132 ] 00:04:58.460 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.460 [2024-07-13 06:00:04.925404] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.717 [2024-07-13 06:00:05.043053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.717 06:00:05 -- accel/accel.sh@21 -- # val= 00:04:58.717 06:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:58.717 06:00:05 -- accel/accel.sh@20 -- # IFS=: 00:04:58.717 06:00:05 -- accel/accel.sh@20 -- # read -r var val 00:04:58.717 06:00:05 -- accel/accel.sh@21 -- # val= 00:04:58.717 06:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:58.717 06:00:05 -- accel/accel.sh@20 -- # IFS=: 00:04:58.717 06:00:05 -- accel/accel.sh@20 -- # read -r var val 00:04:58.717 06:00:05 -- accel/accel.sh@21 -- # val=0x1 00:04:58.717 06:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:58.717 06:00:05 -- accel/accel.sh@20 -- # IFS=: 00:04:58.717 06:00:05 -- accel/accel.sh@20 -- # read -r var val 00:04:58.717 06:00:05 -- accel/accel.sh@21 -- # val= 00:04:58.717 06:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:58.717 06:00:05 -- accel/accel.sh@20 -- # IFS=: 00:04:58.717 06:00:05 -- accel/accel.sh@20 -- # read -r var val 00:04:58.717 06:00:05 -- accel/accel.sh@21 -- # val= 00:04:58.717 06:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:58.717 06:00:05 -- accel/accel.sh@20 -- # IFS=: 00:04:58.717 06:00:05 -- accel/accel.sh@20 -- # read -r var val 00:04:58.717 06:00:05 -- accel/accel.sh@21 -- # val=crc32c 00:04:58.717 06:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:58.717 06:00:05 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:04:58.717 06:00:05 -- accel/accel.sh@20 -- # IFS=: 00:04:58.717 06:00:05 -- accel/accel.sh@20 -- # read -r var val 00:04:58.717 06:00:05 -- accel/accel.sh@21 -- # val=32 00:04:58.717 06:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:58.717 06:00:05 -- accel/accel.sh@20 -- # IFS=: 00:04:58.717 
06:00:05 -- accel/accel.sh@20 -- # read -r var val 00:04:58.717 06:00:05 -- accel/accel.sh@21 -- # val='4096 bytes' 00:04:58.717 06:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:58.717 06:00:05 -- accel/accel.sh@20 -- # IFS=: 00:04:58.717 06:00:05 -- accel/accel.sh@20 -- # read -r var val 00:04:58.717 06:00:05 -- accel/accel.sh@21 -- # val= 00:04:58.717 06:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:58.717 06:00:05 -- accel/accel.sh@20 -- # IFS=: 00:04:58.717 06:00:05 -- accel/accel.sh@20 -- # read -r var val 00:04:58.717 06:00:05 -- accel/accel.sh@21 -- # val=software 00:04:58.717 06:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:58.717 06:00:05 -- accel/accel.sh@23 -- # accel_module=software 00:04:58.717 06:00:05 -- accel/accel.sh@20 -- # IFS=: 00:04:58.717 06:00:05 -- accel/accel.sh@20 -- # read -r var val 00:04:58.717 06:00:05 -- accel/accel.sh@21 -- # val=32 00:04:58.717 06:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:58.717 06:00:05 -- accel/accel.sh@20 -- # IFS=: 00:04:58.717 06:00:05 -- accel/accel.sh@20 -- # read -r var val 00:04:58.717 06:00:05 -- accel/accel.sh@21 -- # val=32 00:04:58.717 06:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:58.717 06:00:05 -- accel/accel.sh@20 -- # IFS=: 00:04:58.717 06:00:05 -- accel/accel.sh@20 -- # read -r var val 00:04:58.717 06:00:05 -- accel/accel.sh@21 -- # val=1 00:04:58.717 06:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:58.717 06:00:05 -- accel/accel.sh@20 -- # IFS=: 00:04:58.717 06:00:05 -- accel/accel.sh@20 -- # read -r var val 00:04:58.717 06:00:05 -- accel/accel.sh@21 -- # val='1 seconds' 00:04:58.717 06:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:58.717 06:00:05 -- accel/accel.sh@20 -- # IFS=: 00:04:58.717 06:00:05 -- accel/accel.sh@20 -- # read -r var val 00:04:58.717 06:00:05 -- accel/accel.sh@21 -- # val=Yes 00:04:58.717 06:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:58.717 06:00:05 -- accel/accel.sh@20 -- # IFS=: 00:04:58.717 06:00:05 -- accel/accel.sh@20 -- # read -r var val 00:04:58.717 06:00:05 -- accel/accel.sh@21 -- # val= 00:04:58.717 06:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:58.717 06:00:05 -- accel/accel.sh@20 -- # IFS=: 00:04:58.717 06:00:05 -- accel/accel.sh@20 -- # read -r var val 00:04:58.717 06:00:05 -- accel/accel.sh@21 -- # val= 00:04:58.717 06:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:04:58.717 06:00:05 -- accel/accel.sh@20 -- # IFS=: 00:04:58.717 06:00:05 -- accel/accel.sh@20 -- # read -r var val 00:05:00.086 06:00:06 -- accel/accel.sh@21 -- # val= 00:05:00.086 06:00:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.086 06:00:06 -- accel/accel.sh@20 -- # IFS=: 00:05:00.086 06:00:06 -- accel/accel.sh@20 -- # read -r var val 00:05:00.086 06:00:06 -- accel/accel.sh@21 -- # val= 00:05:00.086 06:00:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.086 06:00:06 -- accel/accel.sh@20 -- # IFS=: 00:05:00.086 06:00:06 -- accel/accel.sh@20 -- # read -r var val 00:05:00.086 06:00:06 -- accel/accel.sh@21 -- # val= 00:05:00.086 06:00:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.086 06:00:06 -- accel/accel.sh@20 -- # IFS=: 00:05:00.086 06:00:06 -- accel/accel.sh@20 -- # read -r var val 00:05:00.086 06:00:06 -- accel/accel.sh@21 -- # val= 00:05:00.086 06:00:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.086 06:00:06 -- accel/accel.sh@20 -- # IFS=: 00:05:00.086 06:00:06 -- accel/accel.sh@20 -- # read -r var val 00:05:00.086 06:00:06 -- accel/accel.sh@21 -- # val= 00:05:00.086 06:00:06 -- accel/accel.sh@22 -- # case "$var" in 
00:05:00.086 06:00:06 -- accel/accel.sh@20 -- # IFS=: 00:05:00.086 06:00:06 -- accel/accel.sh@20 -- # read -r var val 00:05:00.086 06:00:06 -- accel/accel.sh@21 -- # val= 00:05:00.086 06:00:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:00.086 06:00:06 -- accel/accel.sh@20 -- # IFS=: 00:05:00.086 06:00:06 -- accel/accel.sh@20 -- # read -r var val 00:05:00.086 06:00:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:00.086 06:00:06 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:00.086 06:00:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:00.086 00:05:00.086 real 0m2.948s 00:05:00.086 user 0m2.626s 00:05:00.086 sys 0m0.312s 00:05:00.086 06:00:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.086 06:00:06 -- common/autotest_common.sh@10 -- # set +x 00:05:00.086 ************************************ 00:05:00.086 END TEST accel_crc32c 00:05:00.086 ************************************ 00:05:00.086 06:00:06 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:00.086 06:00:06 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:00.086 06:00:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:00.086 06:00:06 -- common/autotest_common.sh@10 -- # set +x 00:05:00.086 ************************************ 00:05:00.086 START TEST accel_crc32c_C2 00:05:00.086 ************************************ 00:05:00.086 06:00:06 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:00.086 06:00:06 -- accel/accel.sh@16 -- # local accel_opc 00:05:00.086 06:00:06 -- accel/accel.sh@17 -- # local accel_module 00:05:00.086 06:00:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:00.086 06:00:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:00.086 06:00:06 -- accel/accel.sh@12 -- # build_accel_config 00:05:00.086 06:00:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:00.086 06:00:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:00.086 06:00:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:00.086 06:00:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:00.086 06:00:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:00.086 06:00:06 -- accel/accel.sh@41 -- # local IFS=, 00:05:00.086 06:00:06 -- accel/accel.sh@42 -- # jq -r . 00:05:00.086 [2024-07-13 06:00:06.351689] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
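The bandwidth column in the crc32c summary above is consistent with transfers per second times the 4096-byte transfer size: 405472 * 4096 / 2^20 truncates to 1583 MiB/s, which is what the Total line prints. A one-line check with the same numbers (the truncation is an assumption that happens to match):

    awk 'BEGIN { print int(405472 * 4096 / (1024 * 1024)), "MiB/s" }'   # prints: 1583 MiB/s

The copy and fill totals later in the log line up the same way (278432/s -> 1087 MiB/s, 404800/s -> 1581 MiB/s).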
00:05:00.086 [2024-07-13 06:00:06.351762] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1003412 ] 00:05:00.086 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.086 [2024-07-13 06:00:06.413010] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.086 [2024-07-13 06:00:06.530910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.455 06:00:07 -- accel/accel.sh@18 -- # out=' 00:05:01.455 SPDK Configuration: 00:05:01.455 Core mask: 0x1 00:05:01.455 00:05:01.455 Accel Perf Configuration: 00:05:01.455 Workload Type: crc32c 00:05:01.455 CRC-32C seed: 0 00:05:01.455 Transfer size: 4096 bytes 00:05:01.455 Vector count 2 00:05:01.455 Module: software 00:05:01.455 Queue depth: 32 00:05:01.455 Allocate depth: 32 00:05:01.455 # threads/core: 1 00:05:01.455 Run time: 1 seconds 00:05:01.455 Verify: Yes 00:05:01.455 00:05:01.455 Running for 1 seconds... 00:05:01.455 00:05:01.455 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:01.455 ------------------------------------------------------------------------------------ 00:05:01.455 0,0 311232/s 2431 MiB/s 0 0 00:05:01.455 ==================================================================================== 00:05:01.455 Total 311232/s 1215 MiB/s 0 0' 00:05:01.455 06:00:07 -- accel/accel.sh@20 -- # IFS=: 00:05:01.455 06:00:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:01.455 06:00:07 -- accel/accel.sh@20 -- # read -r var val 00:05:01.455 06:00:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:01.455 06:00:07 -- accel/accel.sh@12 -- # build_accel_config 00:05:01.455 06:00:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:01.455 06:00:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:01.455 06:00:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:01.455 06:00:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:01.455 06:00:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:01.455 06:00:07 -- accel/accel.sh@41 -- # local IFS=, 00:05:01.455 06:00:07 -- accel/accel.sh@42 -- # jq -r . 00:05:01.455 [2024-07-13 06:00:07.809841] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:01.455 [2024-07-13 06:00:07.809980] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1003805 ] 00:05:01.455 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.455 [2024-07-13 06:00:07.874105] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.722 [2024-07-13 06:00:07.992626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.722 06:00:08 -- accel/accel.sh@21 -- # val= 00:05:01.722 06:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.722 06:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:01.722 06:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:01.722 06:00:08 -- accel/accel.sh@21 -- # val= 00:05:01.722 06:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.722 06:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:01.722 06:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:01.722 06:00:08 -- accel/accel.sh@21 -- # val=0x1 00:05:01.722 06:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.722 06:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:01.722 06:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:01.722 06:00:08 -- accel/accel.sh@21 -- # val= 00:05:01.722 06:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.722 06:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:01.722 06:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:01.722 06:00:08 -- accel/accel.sh@21 -- # val= 00:05:01.722 06:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.722 06:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:01.722 06:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:01.722 06:00:08 -- accel/accel.sh@21 -- # val=crc32c 00:05:01.722 06:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.722 06:00:08 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:01.722 06:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:01.722 06:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:01.722 06:00:08 -- accel/accel.sh@21 -- # val=0 00:05:01.722 06:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.722 06:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:01.722 06:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:01.722 06:00:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:01.722 06:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.722 06:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:01.722 06:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:01.722 06:00:08 -- accel/accel.sh@21 -- # val= 00:05:01.722 06:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.722 06:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:01.722 06:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:01.722 06:00:08 -- accel/accel.sh@21 -- # val=software 00:05:01.722 06:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.722 06:00:08 -- accel/accel.sh@23 -- # accel_module=software 00:05:01.722 06:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:01.722 06:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:01.722 06:00:08 -- accel/accel.sh@21 -- # val=32 00:05:01.722 06:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.722 06:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:01.722 06:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:01.722 06:00:08 -- accel/accel.sh@21 -- # val=32 00:05:01.722 06:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.722 06:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:01.722 06:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:01.722 06:00:08 -- 
accel/accel.sh@21 -- # val=1 00:05:01.722 06:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.722 06:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:01.722 06:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:01.722 06:00:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:01.722 06:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.722 06:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:01.722 06:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:01.722 06:00:08 -- accel/accel.sh@21 -- # val=Yes 00:05:01.722 06:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.722 06:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:01.722 06:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:01.722 06:00:08 -- accel/accel.sh@21 -- # val= 00:05:01.722 06:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.722 06:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:01.722 06:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:01.722 06:00:08 -- accel/accel.sh@21 -- # val= 00:05:01.722 06:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:05:01.722 06:00:08 -- accel/accel.sh@20 -- # IFS=: 00:05:01.722 06:00:08 -- accel/accel.sh@20 -- # read -r var val 00:05:03.102 06:00:09 -- accel/accel.sh@21 -- # val= 00:05:03.102 06:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.102 06:00:09 -- accel/accel.sh@20 -- # IFS=: 00:05:03.102 06:00:09 -- accel/accel.sh@20 -- # read -r var val 00:05:03.102 06:00:09 -- accel/accel.sh@21 -- # val= 00:05:03.102 06:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.102 06:00:09 -- accel/accel.sh@20 -- # IFS=: 00:05:03.102 06:00:09 -- accel/accel.sh@20 -- # read -r var val 00:05:03.102 06:00:09 -- accel/accel.sh@21 -- # val= 00:05:03.102 06:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.102 06:00:09 -- accel/accel.sh@20 -- # IFS=: 00:05:03.102 06:00:09 -- accel/accel.sh@20 -- # read -r var val 00:05:03.102 06:00:09 -- accel/accel.sh@21 -- # val= 00:05:03.102 06:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.102 06:00:09 -- accel/accel.sh@20 -- # IFS=: 00:05:03.102 06:00:09 -- accel/accel.sh@20 -- # read -r var val 00:05:03.102 06:00:09 -- accel/accel.sh@21 -- # val= 00:05:03.102 06:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.102 06:00:09 -- accel/accel.sh@20 -- # IFS=: 00:05:03.102 06:00:09 -- accel/accel.sh@20 -- # read -r var val 00:05:03.102 06:00:09 -- accel/accel.sh@21 -- # val= 00:05:03.102 06:00:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:03.102 06:00:09 -- accel/accel.sh@20 -- # IFS=: 00:05:03.102 06:00:09 -- accel/accel.sh@20 -- # read -r var val 00:05:03.102 06:00:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:03.102 06:00:09 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:03.102 06:00:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:03.102 00:05:03.102 real 0m2.914s 00:05:03.102 user 0m2.610s 00:05:03.102 sys 0m0.293s 00:05:03.102 06:00:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.102 06:00:09 -- common/autotest_common.sh@10 -- # set +x 00:05:03.102 ************************************ 00:05:03.102 END TEST accel_crc32c_C2 00:05:03.102 ************************************ 00:05:03.102 06:00:09 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:03.102 06:00:09 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:03.102 06:00:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:03.102 06:00:09 -- common/autotest_common.sh@10 -- # set +x 00:05:03.102 ************************************ 00:05:03.102 START TEST accel_copy 
00:05:03.102 ************************************ 00:05:03.102 06:00:09 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:05:03.102 06:00:09 -- accel/accel.sh@16 -- # local accel_opc 00:05:03.102 06:00:09 -- accel/accel.sh@17 -- # local accel_module 00:05:03.102 06:00:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:05:03.102 06:00:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:03.102 06:00:09 -- accel/accel.sh@12 -- # build_accel_config 00:05:03.103 06:00:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:03.103 06:00:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:03.103 06:00:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:03.103 06:00:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:03.103 06:00:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:03.103 06:00:09 -- accel/accel.sh@41 -- # local IFS=, 00:05:03.103 06:00:09 -- accel/accel.sh@42 -- # jq -r . 00:05:03.103 [2024-07-13 06:00:09.288641] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:03.103 [2024-07-13 06:00:09.288713] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1004218 ] 00:05:03.103 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.103 [2024-07-13 06:00:09.350669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.103 [2024-07-13 06:00:09.466694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.473 06:00:10 -- accel/accel.sh@18 -- # out=' 00:05:04.473 SPDK Configuration: 00:05:04.473 Core mask: 0x1 00:05:04.473 00:05:04.473 Accel Perf Configuration: 00:05:04.473 Workload Type: copy 00:05:04.473 Transfer size: 4096 bytes 00:05:04.473 Vector count 1 00:05:04.473 Module: software 00:05:04.473 Queue depth: 32 00:05:04.473 Allocate depth: 32 00:05:04.473 # threads/core: 1 00:05:04.473 Run time: 1 seconds 00:05:04.473 Verify: Yes 00:05:04.473 00:05:04.473 Running for 1 seconds... 00:05:04.473 00:05:04.473 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:04.473 ------------------------------------------------------------------------------------ 00:05:04.473 0,0 278432/s 1087 MiB/s 0 0 00:05:04.473 ==================================================================================== 00:05:04.473 Total 278432/s 1087 MiB/s 0 0' 00:05:04.473 06:00:10 -- accel/accel.sh@20 -- # IFS=: 00:05:04.473 06:00:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:04.473 06:00:10 -- accel/accel.sh@20 -- # read -r var val 00:05:04.473 06:00:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:04.473 06:00:10 -- accel/accel.sh@12 -- # build_accel_config 00:05:04.473 06:00:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:04.473 06:00:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:04.473 06:00:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:04.473 06:00:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:04.473 06:00:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:04.473 06:00:10 -- accel/accel.sh@41 -- # local IFS=, 00:05:04.473 06:00:10 -- accel/accel.sh@42 -- # jq -r . 00:05:04.473 [2024-07-13 06:00:10.760223] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:04.473 [2024-07-13 06:00:10.760298] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1004437 ] 00:05:04.473 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.473 [2024-07-13 06:00:10.823916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.473 [2024-07-13 06:00:10.940925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.730 06:00:10 -- accel/accel.sh@21 -- # val= 00:05:04.730 06:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.730 06:00:10 -- accel/accel.sh@20 -- # IFS=: 00:05:04.730 06:00:10 -- accel/accel.sh@20 -- # read -r var val 00:05:04.730 06:00:11 -- accel/accel.sh@21 -- # val= 00:05:04.730 06:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.730 06:00:11 -- accel/accel.sh@20 -- # IFS=: 00:05:04.730 06:00:11 -- accel/accel.sh@20 -- # read -r var val 00:05:04.730 06:00:11 -- accel/accel.sh@21 -- # val=0x1 00:05:04.730 06:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.730 06:00:11 -- accel/accel.sh@20 -- # IFS=: 00:05:04.730 06:00:11 -- accel/accel.sh@20 -- # read -r var val 00:05:04.730 06:00:11 -- accel/accel.sh@21 -- # val= 00:05:04.730 06:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.730 06:00:11 -- accel/accel.sh@20 -- # IFS=: 00:05:04.730 06:00:11 -- accel/accel.sh@20 -- # read -r var val 00:05:04.730 06:00:11 -- accel/accel.sh@21 -- # val= 00:05:04.730 06:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.730 06:00:11 -- accel/accel.sh@20 -- # IFS=: 00:05:04.730 06:00:11 -- accel/accel.sh@20 -- # read -r var val 00:05:04.730 06:00:11 -- accel/accel.sh@21 -- # val=copy 00:05:04.730 06:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.730 06:00:11 -- accel/accel.sh@24 -- # accel_opc=copy 00:05:04.730 06:00:11 -- accel/accel.sh@20 -- # IFS=: 00:05:04.730 06:00:11 -- accel/accel.sh@20 -- # read -r var val 00:05:04.730 06:00:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:04.730 06:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.730 06:00:11 -- accel/accel.sh@20 -- # IFS=: 00:05:04.730 06:00:11 -- accel/accel.sh@20 -- # read -r var val 00:05:04.730 06:00:11 -- accel/accel.sh@21 -- # val= 00:05:04.730 06:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.730 06:00:11 -- accel/accel.sh@20 -- # IFS=: 00:05:04.730 06:00:11 -- accel/accel.sh@20 -- # read -r var val 00:05:04.730 06:00:11 -- accel/accel.sh@21 -- # val=software 00:05:04.730 06:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.731 06:00:11 -- accel/accel.sh@23 -- # accel_module=software 00:05:04.731 06:00:11 -- accel/accel.sh@20 -- # IFS=: 00:05:04.731 06:00:11 -- accel/accel.sh@20 -- # read -r var val 00:05:04.731 06:00:11 -- accel/accel.sh@21 -- # val=32 00:05:04.731 06:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.731 06:00:11 -- accel/accel.sh@20 -- # IFS=: 00:05:04.731 06:00:11 -- accel/accel.sh@20 -- # read -r var val 00:05:04.731 06:00:11 -- accel/accel.sh@21 -- # val=32 00:05:04.731 06:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.731 06:00:11 -- accel/accel.sh@20 -- # IFS=: 00:05:04.731 06:00:11 -- accel/accel.sh@20 -- # read -r var val 00:05:04.731 06:00:11 -- accel/accel.sh@21 -- # val=1 00:05:04.731 06:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.731 06:00:11 -- accel/accel.sh@20 -- # IFS=: 00:05:04.731 06:00:11 -- accel/accel.sh@20 -- # read -r var val 00:05:04.731 06:00:11 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:05:04.731 06:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.731 06:00:11 -- accel/accel.sh@20 -- # IFS=: 00:05:04.731 06:00:11 -- accel/accel.sh@20 -- # read -r var val 00:05:04.731 06:00:11 -- accel/accel.sh@21 -- # val=Yes 00:05:04.731 06:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.731 06:00:11 -- accel/accel.sh@20 -- # IFS=: 00:05:04.731 06:00:11 -- accel/accel.sh@20 -- # read -r var val 00:05:04.731 06:00:11 -- accel/accel.sh@21 -- # val= 00:05:04.731 06:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.731 06:00:11 -- accel/accel.sh@20 -- # IFS=: 00:05:04.731 06:00:11 -- accel/accel.sh@20 -- # read -r var val 00:05:04.731 06:00:11 -- accel/accel.sh@21 -- # val= 00:05:04.731 06:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:05:04.731 06:00:11 -- accel/accel.sh@20 -- # IFS=: 00:05:04.731 06:00:11 -- accel/accel.sh@20 -- # read -r var val 00:05:06.101 06:00:12 -- accel/accel.sh@21 -- # val= 00:05:06.101 06:00:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:06.101 06:00:12 -- accel/accel.sh@20 -- # IFS=: 00:05:06.101 06:00:12 -- accel/accel.sh@20 -- # read -r var val 00:05:06.101 06:00:12 -- accel/accel.sh@21 -- # val= 00:05:06.101 06:00:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:06.101 06:00:12 -- accel/accel.sh@20 -- # IFS=: 00:05:06.101 06:00:12 -- accel/accel.sh@20 -- # read -r var val 00:05:06.101 06:00:12 -- accel/accel.sh@21 -- # val= 00:05:06.101 06:00:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:06.101 06:00:12 -- accel/accel.sh@20 -- # IFS=: 00:05:06.101 06:00:12 -- accel/accel.sh@20 -- # read -r var val 00:05:06.101 06:00:12 -- accel/accel.sh@21 -- # val= 00:05:06.101 06:00:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:06.101 06:00:12 -- accel/accel.sh@20 -- # IFS=: 00:05:06.101 06:00:12 -- accel/accel.sh@20 -- # read -r var val 00:05:06.101 06:00:12 -- accel/accel.sh@21 -- # val= 00:05:06.101 06:00:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:06.101 06:00:12 -- accel/accel.sh@20 -- # IFS=: 00:05:06.101 06:00:12 -- accel/accel.sh@20 -- # read -r var val 00:05:06.101 06:00:12 -- accel/accel.sh@21 -- # val= 00:05:06.101 06:00:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:06.101 06:00:12 -- accel/accel.sh@20 -- # IFS=: 00:05:06.101 06:00:12 -- accel/accel.sh@20 -- # read -r var val 00:05:06.101 06:00:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:06.101 06:00:12 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:05:06.101 06:00:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:06.101 00:05:06.101 real 0m2.941s 00:05:06.101 user 0m2.647s 00:05:06.101 sys 0m0.284s 00:05:06.101 06:00:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.101 06:00:12 -- common/autotest_common.sh@10 -- # set +x 00:05:06.101 ************************************ 00:05:06.101 END TEST accel_copy 00:05:06.101 ************************************ 00:05:06.101 06:00:12 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:06.101 06:00:12 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:05:06.101 06:00:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:06.101 06:00:12 -- common/autotest_common.sh@10 -- # set +x 00:05:06.101 ************************************ 00:05:06.101 START TEST accel_fill 00:05:06.101 ************************************ 00:05:06.101 06:00:12 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:06.101 06:00:12 -- accel/accel.sh@16 -- # local accel_opc 
00:05:06.101 06:00:12 -- accel/accel.sh@17 -- # local accel_module 00:05:06.101 06:00:12 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:06.101 06:00:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:06.101 06:00:12 -- accel/accel.sh@12 -- # build_accel_config 00:05:06.101 06:00:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:06.101 06:00:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:06.101 06:00:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:06.101 06:00:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:06.101 06:00:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:06.101 06:00:12 -- accel/accel.sh@41 -- # local IFS=, 00:05:06.101 06:00:12 -- accel/accel.sh@42 -- # jq -r . 00:05:06.101 [2024-07-13 06:00:12.250429] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:06.101 [2024-07-13 06:00:12.250500] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1004646 ] 00:05:06.101 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.101 [2024-07-13 06:00:12.310534] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.101 [2024-07-13 06:00:12.429606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.471 06:00:13 -- accel/accel.sh@18 -- # out=' 00:05:07.471 SPDK Configuration: 00:05:07.471 Core mask: 0x1 00:05:07.471 00:05:07.471 Accel Perf Configuration: 00:05:07.471 Workload Type: fill 00:05:07.471 Fill pattern: 0x80 00:05:07.471 Transfer size: 4096 bytes 00:05:07.471 Vector count 1 00:05:07.471 Module: software 00:05:07.471 Queue depth: 64 00:05:07.471 Allocate depth: 64 00:05:07.471 # threads/core: 1 00:05:07.471 Run time: 1 seconds 00:05:07.471 Verify: Yes 00:05:07.471 00:05:07.471 Running for 1 seconds... 00:05:07.471 00:05:07.471 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:07.471 ------------------------------------------------------------------------------------ 00:05:07.471 0,0 404800/s 1581 MiB/s 0 0 00:05:07.471 ==================================================================================== 00:05:07.471 Total 404800/s 1581 MiB/s 0 0' 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # IFS=: 00:05:07.471 06:00:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # read -r var val 00:05:07.471 06:00:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:07.471 06:00:13 -- accel/accel.sh@12 -- # build_accel_config 00:05:07.471 06:00:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:07.471 06:00:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.471 06:00:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.471 06:00:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:07.471 06:00:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:07.471 06:00:13 -- accel/accel.sh@41 -- # local IFS=, 00:05:07.471 06:00:13 -- accel/accel.sh@42 -- # jq -r . 00:05:07.471 [2024-07-13 06:00:13.715605] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:07.471 [2024-07-13 06:00:13.715686] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1004795 ] 00:05:07.471 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.471 [2024-07-13 06:00:13.777760] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.471 [2024-07-13 06:00:13.894222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.471 06:00:13 -- accel/accel.sh@21 -- # val= 00:05:07.471 06:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # IFS=: 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # read -r var val 00:05:07.471 06:00:13 -- accel/accel.sh@21 -- # val= 00:05:07.471 06:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # IFS=: 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # read -r var val 00:05:07.471 06:00:13 -- accel/accel.sh@21 -- # val=0x1 00:05:07.471 06:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # IFS=: 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # read -r var val 00:05:07.471 06:00:13 -- accel/accel.sh@21 -- # val= 00:05:07.471 06:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # IFS=: 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # read -r var val 00:05:07.471 06:00:13 -- accel/accel.sh@21 -- # val= 00:05:07.471 06:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # IFS=: 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # read -r var val 00:05:07.471 06:00:13 -- accel/accel.sh@21 -- # val=fill 00:05:07.471 06:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.471 06:00:13 -- accel/accel.sh@24 -- # accel_opc=fill 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # IFS=: 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # read -r var val 00:05:07.471 06:00:13 -- accel/accel.sh@21 -- # val=0x80 00:05:07.471 06:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # IFS=: 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # read -r var val 00:05:07.471 06:00:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:07.471 06:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # IFS=: 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # read -r var val 00:05:07.471 06:00:13 -- accel/accel.sh@21 -- # val= 00:05:07.471 06:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # IFS=: 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # read -r var val 00:05:07.471 06:00:13 -- accel/accel.sh@21 -- # val=software 00:05:07.471 06:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.471 06:00:13 -- accel/accel.sh@23 -- # accel_module=software 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # IFS=: 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # read -r var val 00:05:07.471 06:00:13 -- accel/accel.sh@21 -- # val=64 00:05:07.471 06:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # IFS=: 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # read -r var val 00:05:07.471 06:00:13 -- accel/accel.sh@21 -- # val=64 00:05:07.471 06:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # IFS=: 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # read -r var val 00:05:07.471 06:00:13 -- 
accel/accel.sh@21 -- # val=1 00:05:07.471 06:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # IFS=: 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # read -r var val 00:05:07.471 06:00:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:07.471 06:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # IFS=: 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # read -r var val 00:05:07.471 06:00:13 -- accel/accel.sh@21 -- # val=Yes 00:05:07.471 06:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # IFS=: 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # read -r var val 00:05:07.471 06:00:13 -- accel/accel.sh@21 -- # val= 00:05:07.471 06:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # IFS=: 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # read -r var val 00:05:07.471 06:00:13 -- accel/accel.sh@21 -- # val= 00:05:07.471 06:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # IFS=: 00:05:07.471 06:00:13 -- accel/accel.sh@20 -- # read -r var val 00:05:08.840 06:00:15 -- accel/accel.sh@21 -- # val= 00:05:08.840 06:00:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.840 06:00:15 -- accel/accel.sh@20 -- # IFS=: 00:05:08.840 06:00:15 -- accel/accel.sh@20 -- # read -r var val 00:05:08.840 06:00:15 -- accel/accel.sh@21 -- # val= 00:05:08.840 06:00:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.840 06:00:15 -- accel/accel.sh@20 -- # IFS=: 00:05:08.840 06:00:15 -- accel/accel.sh@20 -- # read -r var val 00:05:08.841 06:00:15 -- accel/accel.sh@21 -- # val= 00:05:08.841 06:00:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.841 06:00:15 -- accel/accel.sh@20 -- # IFS=: 00:05:08.841 06:00:15 -- accel/accel.sh@20 -- # read -r var val 00:05:08.841 06:00:15 -- accel/accel.sh@21 -- # val= 00:05:08.841 06:00:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.841 06:00:15 -- accel/accel.sh@20 -- # IFS=: 00:05:08.841 06:00:15 -- accel/accel.sh@20 -- # read -r var val 00:05:08.841 06:00:15 -- accel/accel.sh@21 -- # val= 00:05:08.841 06:00:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.841 06:00:15 -- accel/accel.sh@20 -- # IFS=: 00:05:08.841 06:00:15 -- accel/accel.sh@20 -- # read -r var val 00:05:08.841 06:00:15 -- accel/accel.sh@21 -- # val= 00:05:08.841 06:00:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:08.841 06:00:15 -- accel/accel.sh@20 -- # IFS=: 00:05:08.841 06:00:15 -- accel/accel.sh@20 -- # read -r var val 00:05:08.841 06:00:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:08.841 06:00:15 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:05:08.841 06:00:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:08.841 00:05:08.841 real 0m2.935s 00:05:08.841 user 0m2.641s 00:05:08.841 sys 0m0.285s 00:05:08.841 06:00:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.841 06:00:15 -- common/autotest_common.sh@10 -- # set +x 00:05:08.841 ************************************ 00:05:08.841 END TEST accel_fill 00:05:08.841 ************************************ 00:05:08.841 06:00:15 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:08.841 06:00:15 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:08.841 06:00:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:08.841 06:00:15 -- common/autotest_common.sh@10 -- # set +x 00:05:08.841 ************************************ 00:05:08.841 START TEST 
accel_copy_crc32c 00:05:08.841 ************************************ 00:05:08.841 06:00:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:05:08.841 06:00:15 -- accel/accel.sh@16 -- # local accel_opc 00:05:08.841 06:00:15 -- accel/accel.sh@17 -- # local accel_module 00:05:08.841 06:00:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:08.841 06:00:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:08.841 06:00:15 -- accel/accel.sh@12 -- # build_accel_config 00:05:08.841 06:00:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:08.841 06:00:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:08.841 06:00:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:08.841 06:00:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:08.841 06:00:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:08.841 06:00:15 -- accel/accel.sh@41 -- # local IFS=, 00:05:08.841 06:00:15 -- accel/accel.sh@42 -- # jq -r . 00:05:08.841 [2024-07-13 06:00:15.212659] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:08.841 [2024-07-13 06:00:15.212737] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1004956 ] 00:05:08.841 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.841 [2024-07-13 06:00:15.275327] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.098 [2024-07-13 06:00:15.393637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.468 06:00:16 -- accel/accel.sh@18 -- # out=' 00:05:10.468 SPDK Configuration: 00:05:10.468 Core mask: 0x1 00:05:10.468 00:05:10.468 Accel Perf Configuration: 00:05:10.468 Workload Type: copy_crc32c 00:05:10.468 CRC-32C seed: 0 00:05:10.468 Vector size: 4096 bytes 00:05:10.468 Transfer size: 4096 bytes 00:05:10.468 Vector count 1 00:05:10.468 Module: software 00:05:10.468 Queue depth: 32 00:05:10.468 Allocate depth: 32 00:05:10.468 # threads/core: 1 00:05:10.468 Run time: 1 seconds 00:05:10.468 Verify: Yes 00:05:10.468 00:05:10.468 Running for 1 seconds... 00:05:10.468 00:05:10.468 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:10.468 ------------------------------------------------------------------------------------ 00:05:10.468 0,0 217664/s 850 MiB/s 0 0 00:05:10.468 ==================================================================================== 00:05:10.468 Total 217664/s 850 MiB/s 0 0' 00:05:10.468 06:00:16 -- accel/accel.sh@20 -- # IFS=: 00:05:10.468 06:00:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:10.468 06:00:16 -- accel/accel.sh@20 -- # read -r var val 00:05:10.468 06:00:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:10.468 06:00:16 -- accel/accel.sh@12 -- # build_accel_config 00:05:10.468 06:00:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:10.468 06:00:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:10.468 06:00:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:10.468 06:00:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:10.468 06:00:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:10.468 06:00:16 -- accel/accel.sh@41 -- # local IFS=, 00:05:10.468 06:00:16 -- accel/accel.sh@42 -- # jq -r . 
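Judging by the command echoed in the trace above, each pass boils down to one accel_perf invocation whose flags reappear in the configuration block it prints: -t 1 shows up as "Run time: 1 seconds", -w as the workload type, -y as "Verify: Yes" (and, in the fill case, -f 128 as fill pattern 0x80 with -q/-a as the queue and allocate depths). A rough way to repeat a single pass outside the harness, under those assumptions:

    # paths and flag meanings inferred from the trace above; adjust to your tree
    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./build/examples/accel_perf -t 1 -w copy_crc32c -y
    # the harness additionally passes -c /dev/fd/62 with a JSON accel config,
    # but the accel_json_cfg array is empty in these runs, which presumably
    # leaves the default software module in place ("Module: software" above)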
00:05:10.468 [2024-07-13 06:00:16.683478] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:10.468 [2024-07-13 06:00:16.683555] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1005217 ] 00:05:10.468 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.468 [2024-07-13 06:00:16.744652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.468 [2024-07-13 06:00:16.860171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.468 06:00:16 -- accel/accel.sh@21 -- # val= 00:05:10.468 06:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.468 06:00:16 -- accel/accel.sh@20 -- # IFS=: 00:05:10.468 06:00:16 -- accel/accel.sh@20 -- # read -r var val 00:05:10.468 06:00:16 -- accel/accel.sh@21 -- # val= 00:05:10.468 06:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.468 06:00:16 -- accel/accel.sh@20 -- # IFS=: 00:05:10.468 06:00:16 -- accel/accel.sh@20 -- # read -r var val 00:05:10.468 06:00:16 -- accel/accel.sh@21 -- # val=0x1 00:05:10.468 06:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.468 06:00:16 -- accel/accel.sh@20 -- # IFS=: 00:05:10.468 06:00:16 -- accel/accel.sh@20 -- # read -r var val 00:05:10.468 06:00:16 -- accel/accel.sh@21 -- # val= 00:05:10.468 06:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.468 06:00:16 -- accel/accel.sh@20 -- # IFS=: 00:05:10.468 06:00:16 -- accel/accel.sh@20 -- # read -r var val 00:05:10.468 06:00:16 -- accel/accel.sh@21 -- # val= 00:05:10.468 06:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.468 06:00:16 -- accel/accel.sh@20 -- # IFS=: 00:05:10.468 06:00:16 -- accel/accel.sh@20 -- # read -r var val 00:05:10.468 06:00:16 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:10.468 06:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.468 06:00:16 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:10.468 06:00:16 -- accel/accel.sh@20 -- # IFS=: 00:05:10.468 06:00:16 -- accel/accel.sh@20 -- # read -r var val 00:05:10.468 06:00:16 -- accel/accel.sh@21 -- # val=0 00:05:10.468 06:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.468 06:00:16 -- accel/accel.sh@20 -- # IFS=: 00:05:10.468 06:00:16 -- accel/accel.sh@20 -- # read -r var val 00:05:10.468 06:00:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:10.468 06:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.468 06:00:16 -- accel/accel.sh@20 -- # IFS=: 00:05:10.468 06:00:16 -- accel/accel.sh@20 -- # read -r var val 00:05:10.468 06:00:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:10.468 06:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.468 06:00:16 -- accel/accel.sh@20 -- # IFS=: 00:05:10.468 06:00:16 -- accel/accel.sh@20 -- # read -r var val 00:05:10.469 06:00:16 -- accel/accel.sh@21 -- # val= 00:05:10.469 06:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.469 06:00:16 -- accel/accel.sh@20 -- # IFS=: 00:05:10.469 06:00:16 -- accel/accel.sh@20 -- # read -r var val 00:05:10.469 06:00:16 -- accel/accel.sh@21 -- # val=software 00:05:10.469 06:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.469 06:00:16 -- accel/accel.sh@23 -- # accel_module=software 00:05:10.469 06:00:16 -- accel/accel.sh@20 -- # IFS=: 00:05:10.469 06:00:16 -- accel/accel.sh@20 -- # read -r var val 00:05:10.469 06:00:16 -- accel/accel.sh@21 -- # val=32 00:05:10.469 06:00:16 -- accel/accel.sh@22 -- # case "$var" in 
00:05:10.469 06:00:16 -- accel/accel.sh@20 -- # IFS=: 00:05:10.469 06:00:16 -- accel/accel.sh@20 -- # read -r var val 00:05:10.469 06:00:16 -- accel/accel.sh@21 -- # val=32 00:05:10.469 06:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.469 06:00:16 -- accel/accel.sh@20 -- # IFS=: 00:05:10.469 06:00:16 -- accel/accel.sh@20 -- # read -r var val 00:05:10.469 06:00:16 -- accel/accel.sh@21 -- # val=1 00:05:10.469 06:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.469 06:00:16 -- accel/accel.sh@20 -- # IFS=: 00:05:10.469 06:00:16 -- accel/accel.sh@20 -- # read -r var val 00:05:10.469 06:00:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:10.469 06:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.469 06:00:16 -- accel/accel.sh@20 -- # IFS=: 00:05:10.469 06:00:16 -- accel/accel.sh@20 -- # read -r var val 00:05:10.469 06:00:16 -- accel/accel.sh@21 -- # val=Yes 00:05:10.469 06:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.469 06:00:16 -- accel/accel.sh@20 -- # IFS=: 00:05:10.469 06:00:16 -- accel/accel.sh@20 -- # read -r var val 00:05:10.469 06:00:16 -- accel/accel.sh@21 -- # val= 00:05:10.469 06:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.469 06:00:16 -- accel/accel.sh@20 -- # IFS=: 00:05:10.469 06:00:16 -- accel/accel.sh@20 -- # read -r var val 00:05:10.469 06:00:16 -- accel/accel.sh@21 -- # val= 00:05:10.469 06:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.469 06:00:16 -- accel/accel.sh@20 -- # IFS=: 00:05:10.469 06:00:16 -- accel/accel.sh@20 -- # read -r var val 00:05:11.840 06:00:18 -- accel/accel.sh@21 -- # val= 00:05:11.840 06:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.840 06:00:18 -- accel/accel.sh@20 -- # IFS=: 00:05:11.840 06:00:18 -- accel/accel.sh@20 -- # read -r var val 00:05:11.840 06:00:18 -- accel/accel.sh@21 -- # val= 00:05:11.840 06:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.840 06:00:18 -- accel/accel.sh@20 -- # IFS=: 00:05:11.840 06:00:18 -- accel/accel.sh@20 -- # read -r var val 00:05:11.840 06:00:18 -- accel/accel.sh@21 -- # val= 00:05:11.840 06:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.840 06:00:18 -- accel/accel.sh@20 -- # IFS=: 00:05:11.840 06:00:18 -- accel/accel.sh@20 -- # read -r var val 00:05:11.840 06:00:18 -- accel/accel.sh@21 -- # val= 00:05:11.840 06:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.840 06:00:18 -- accel/accel.sh@20 -- # IFS=: 00:05:11.840 06:00:18 -- accel/accel.sh@20 -- # read -r var val 00:05:11.840 06:00:18 -- accel/accel.sh@21 -- # val= 00:05:11.840 06:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.840 06:00:18 -- accel/accel.sh@20 -- # IFS=: 00:05:11.840 06:00:18 -- accel/accel.sh@20 -- # read -r var val 00:05:11.840 06:00:18 -- accel/accel.sh@21 -- # val= 00:05:11.840 06:00:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:11.840 06:00:18 -- accel/accel.sh@20 -- # IFS=: 00:05:11.840 06:00:18 -- accel/accel.sh@20 -- # read -r var val 00:05:11.840 06:00:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:11.840 06:00:18 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:11.840 06:00:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:11.840 00:05:11.840 real 0m2.940s 00:05:11.840 user 0m2.644s 00:05:11.840 sys 0m0.288s 00:05:11.840 06:00:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.840 06:00:18 -- common/autotest_common.sh@10 -- # set +x 00:05:11.840 ************************************ 00:05:11.840 END TEST accel_copy_crc32c 00:05:11.840 ************************************ 00:05:11.840 
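The long val= sequence traced above is the harness re-reading accel_perf's own configuration printout: with IFS set to ':' it does read -r var val on each line and a case "$var" match to record accel_opc and accel_module, and the closing [[ -n software ]] / [[ -n copy_crc32c ]] checks confirm that the software module ran the requested opcode. A minimal sketch of that parsing pattern (simplified, with guessed case labels; not the actual accel.sh):

    # parse "Key: value" lines the way the trace above does (IFS=':' + read)
    while IFS=: read -r var val; do
      case "$var" in
        "Workload Type") accel_opc=${val# }    ;;   # e.g. copy_crc32c
        "Module")        accel_module=${val# } ;;   # e.g. software
      esac
    done <<'EOF'
    Workload Type: copy_crc32c
    Module: software
    EOF
    [[ -n $accel_opc && -n $accel_module ]] && echo "ran $accel_opc on $accel_module"

Checking the tool's own summary rather than only its exit code is presumably why the script walks the output this way: it proves the requested opcode really landed on the expected module.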
06:00:18 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:11.840 06:00:18 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:11.840 06:00:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:11.840 06:00:18 -- common/autotest_common.sh@10 -- # set +x 00:05:11.840 ************************************ 00:05:11.840 START TEST accel_copy_crc32c_C2 00:05:11.840 ************************************ 00:05:11.840 06:00:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:11.840 06:00:18 -- accel/accel.sh@16 -- # local accel_opc 00:05:11.840 06:00:18 -- accel/accel.sh@17 -- # local accel_module 00:05:11.840 06:00:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:11.840 06:00:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:11.840 06:00:18 -- accel/accel.sh@12 -- # build_accel_config 00:05:11.840 06:00:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:11.840 06:00:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:11.840 06:00:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:11.840 06:00:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:11.840 06:00:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:11.840 06:00:18 -- accel/accel.sh@41 -- # local IFS=, 00:05:11.840 06:00:18 -- accel/accel.sh@42 -- # jq -r . 00:05:11.840 [2024-07-13 06:00:18.177270] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:11.840 [2024-07-13 06:00:18.177337] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1005375 ] 00:05:11.840 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.840 [2024-07-13 06:00:18.238189] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.098 [2024-07-13 06:00:18.356133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.468 06:00:19 -- accel/accel.sh@18 -- # out=' 00:05:13.468 SPDK Configuration: 00:05:13.468 Core mask: 0x1 00:05:13.468 00:05:13.468 Accel Perf Configuration: 00:05:13.468 Workload Type: copy_crc32c 00:05:13.468 CRC-32C seed: 0 00:05:13.468 Vector size: 4096 bytes 00:05:13.468 Transfer size: 8192 bytes 00:05:13.468 Vector count 2 00:05:13.468 Module: software 00:05:13.468 Queue depth: 32 00:05:13.468 Allocate depth: 32 00:05:13.468 # threads/core: 1 00:05:13.468 Run time: 1 seconds 00:05:13.468 Verify: Yes 00:05:13.468 00:05:13.468 Running for 1 seconds... 
00:05:13.468 00:05:13.468 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:13.468 ------------------------------------------------------------------------------------ 00:05:13.468 0,0 153632/s 1200 MiB/s 0 0 00:05:13.468 ==================================================================================== 00:05:13.468 Total 153632/s 600 MiB/s 0 0' 00:05:13.468 06:00:19 -- accel/accel.sh@20 -- # IFS=: 00:05:13.468 06:00:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:13.468 06:00:19 -- accel/accel.sh@20 -- # read -r var val 00:05:13.468 06:00:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:13.469 06:00:19 -- accel/accel.sh@12 -- # build_accel_config 00:05:13.469 06:00:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:13.469 06:00:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:13.469 06:00:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:13.469 06:00:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:13.469 06:00:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:13.469 06:00:19 -- accel/accel.sh@41 -- # local IFS=, 00:05:13.469 06:00:19 -- accel/accel.sh@42 -- # jq -r . 00:05:13.469 [2024-07-13 06:00:19.652329] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:13.469 [2024-07-13 06:00:19.652408] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1005522 ] 00:05:13.469 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.469 [2024-07-13 06:00:19.713359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.469 [2024-07-13 06:00:19.828135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.469 06:00:19 -- accel/accel.sh@21 -- # val= 00:05:13.469 06:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # IFS=: 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # read -r var val 00:05:13.469 06:00:19 -- accel/accel.sh@21 -- # val= 00:05:13.469 06:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # IFS=: 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # read -r var val 00:05:13.469 06:00:19 -- accel/accel.sh@21 -- # val=0x1 00:05:13.469 06:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # IFS=: 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # read -r var val 00:05:13.469 06:00:19 -- accel/accel.sh@21 -- # val= 00:05:13.469 06:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # IFS=: 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # read -r var val 00:05:13.469 06:00:19 -- accel/accel.sh@21 -- # val= 00:05:13.469 06:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # IFS=: 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # read -r var val 00:05:13.469 06:00:19 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:13.469 06:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.469 06:00:19 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # IFS=: 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # read -r var val 00:05:13.469 06:00:19 -- accel/accel.sh@21 -- # val=0 00:05:13.469 06:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # IFS=: 
00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # read -r var val 00:05:13.469 06:00:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:13.469 06:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # IFS=: 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # read -r var val 00:05:13.469 06:00:19 -- accel/accel.sh@21 -- # val='8192 bytes' 00:05:13.469 06:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # IFS=: 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # read -r var val 00:05:13.469 06:00:19 -- accel/accel.sh@21 -- # val= 00:05:13.469 06:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # IFS=: 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # read -r var val 00:05:13.469 06:00:19 -- accel/accel.sh@21 -- # val=software 00:05:13.469 06:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.469 06:00:19 -- accel/accel.sh@23 -- # accel_module=software 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # IFS=: 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # read -r var val 00:05:13.469 06:00:19 -- accel/accel.sh@21 -- # val=32 00:05:13.469 06:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # IFS=: 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # read -r var val 00:05:13.469 06:00:19 -- accel/accel.sh@21 -- # val=32 00:05:13.469 06:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # IFS=: 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # read -r var val 00:05:13.469 06:00:19 -- accel/accel.sh@21 -- # val=1 00:05:13.469 06:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # IFS=: 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # read -r var val 00:05:13.469 06:00:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:13.469 06:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # IFS=: 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # read -r var val 00:05:13.469 06:00:19 -- accel/accel.sh@21 -- # val=Yes 00:05:13.469 06:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # IFS=: 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # read -r var val 00:05:13.469 06:00:19 -- accel/accel.sh@21 -- # val= 00:05:13.469 06:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # IFS=: 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # read -r var val 00:05:13.469 06:00:19 -- accel/accel.sh@21 -- # val= 00:05:13.469 06:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # IFS=: 00:05:13.469 06:00:19 -- accel/accel.sh@20 -- # read -r var val 00:05:14.842 06:00:21 -- accel/accel.sh@21 -- # val= 00:05:14.842 06:00:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.842 06:00:21 -- accel/accel.sh@20 -- # IFS=: 00:05:14.842 06:00:21 -- accel/accel.sh@20 -- # read -r var val 00:05:14.842 06:00:21 -- accel/accel.sh@21 -- # val= 00:05:14.842 06:00:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.842 06:00:21 -- accel/accel.sh@20 -- # IFS=: 00:05:14.842 06:00:21 -- accel/accel.sh@20 -- # read -r var val 00:05:14.842 06:00:21 -- accel/accel.sh@21 -- # val= 00:05:14.842 06:00:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.842 06:00:21 -- accel/accel.sh@20 -- # IFS=: 00:05:14.842 06:00:21 -- accel/accel.sh@20 -- # read -r var val 00:05:14.842 06:00:21 -- accel/accel.sh@21 -- # val= 00:05:14.842 06:00:21 -- 
accel/accel.sh@22 -- # case "$var" in 00:05:14.842 06:00:21 -- accel/accel.sh@20 -- # IFS=: 00:05:14.842 06:00:21 -- accel/accel.sh@20 -- # read -r var val 00:05:14.842 06:00:21 -- accel/accel.sh@21 -- # val= 00:05:14.842 06:00:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.842 06:00:21 -- accel/accel.sh@20 -- # IFS=: 00:05:14.842 06:00:21 -- accel/accel.sh@20 -- # read -r var val 00:05:14.843 06:00:21 -- accel/accel.sh@21 -- # val= 00:05:14.843 06:00:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:14.843 06:00:21 -- accel/accel.sh@20 -- # IFS=: 00:05:14.843 06:00:21 -- accel/accel.sh@20 -- # read -r var val 00:05:14.843 06:00:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:14.843 06:00:21 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:14.843 06:00:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:14.843 00:05:14.843 real 0m2.941s 00:05:14.843 user 0m2.644s 00:05:14.843 sys 0m0.288s 00:05:14.843 06:00:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.843 06:00:21 -- common/autotest_common.sh@10 -- # set +x 00:05:14.843 ************************************ 00:05:14.843 END TEST accel_copy_crc32c_C2 00:05:14.843 ************************************ 00:05:14.843 06:00:21 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:14.843 06:00:21 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:14.843 06:00:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:14.843 06:00:21 -- common/autotest_common.sh@10 -- # set +x 00:05:14.843 ************************************ 00:05:14.843 START TEST accel_dualcast 00:05:14.843 ************************************ 00:05:14.843 06:00:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:05:14.843 06:00:21 -- accel/accel.sh@16 -- # local accel_opc 00:05:14.843 06:00:21 -- accel/accel.sh@17 -- # local accel_module 00:05:14.843 06:00:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:05:14.843 06:00:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:14.843 06:00:21 -- accel/accel.sh@12 -- # build_accel_config 00:05:14.843 06:00:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:14.843 06:00:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:14.843 06:00:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:14.843 06:00:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:14.843 06:00:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:14.843 06:00:21 -- accel/accel.sh@41 -- # local IFS=, 00:05:14.843 06:00:21 -- accel/accel.sh@42 -- # jq -r . 00:05:14.843 [2024-07-13 06:00:21.142256] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
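One oddity worth noting in the copy_crc32c -C 2 summary further above: the per-core row reports 1200 MiB/s, which matches 153632 transfers/s at the stated 8192-byte transfer size, while the Total row's 600 MiB/s corresponds to the same transfer count at 4096 bytes. This reads as though the two rows use different sizes for the conversion in this SPDK revision; the checks below only confirm the arithmetic, not the cause:

    echo $(( 153632 * 8192 / 1024 / 1024 ))   # -> 1200, the per-core row
    echo $(( 153632 * 4096 / 1024 / 1024 ))   # -> 600,  the Total row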
00:05:14.843 [2024-07-13 06:00:21.142334] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1005796 ] 00:05:14.843 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.843 [2024-07-13 06:00:21.204954] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.843 [2024-07-13 06:00:21.318673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.223 06:00:22 -- accel/accel.sh@18 -- # out=' 00:05:16.223 SPDK Configuration: 00:05:16.223 Core mask: 0x1 00:05:16.223 00:05:16.223 Accel Perf Configuration: 00:05:16.223 Workload Type: dualcast 00:05:16.223 Transfer size: 4096 bytes 00:05:16.223 Vector count 1 00:05:16.223 Module: software 00:05:16.223 Queue depth: 32 00:05:16.223 Allocate depth: 32 00:05:16.223 # threads/core: 1 00:05:16.223 Run time: 1 seconds 00:05:16.223 Verify: Yes 00:05:16.223 00:05:16.223 Running for 1 seconds... 00:05:16.223 00:05:16.223 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:16.223 ------------------------------------------------------------------------------------ 00:05:16.223 0,0 297824/s 1163 MiB/s 0 0 00:05:16.223 ==================================================================================== 00:05:16.223 Total 297824/s 1163 MiB/s 0 0' 00:05:16.223 06:00:22 -- accel/accel.sh@20 -- # IFS=: 00:05:16.223 06:00:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:16.223 06:00:22 -- accel/accel.sh@20 -- # read -r var val 00:05:16.223 06:00:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:16.223 06:00:22 -- accel/accel.sh@12 -- # build_accel_config 00:05:16.223 06:00:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:16.223 06:00:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:16.223 06:00:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:16.223 06:00:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:16.223 06:00:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:16.223 06:00:22 -- accel/accel.sh@41 -- # local IFS=, 00:05:16.223 06:00:22 -- accel/accel.sh@42 -- # jq -r . 00:05:16.223 [2024-07-13 06:00:22.614333] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:16.224 [2024-07-13 06:00:22.614417] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1005945 ] 00:05:16.224 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.224 [2024-07-13 06:00:22.673890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.484 [2024-07-13 06:00:22.792666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.484 06:00:22 -- accel/accel.sh@21 -- # val= 00:05:16.484 06:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.484 06:00:22 -- accel/accel.sh@20 -- # IFS=: 00:05:16.484 06:00:22 -- accel/accel.sh@20 -- # read -r var val 00:05:16.484 06:00:22 -- accel/accel.sh@21 -- # val= 00:05:16.484 06:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.484 06:00:22 -- accel/accel.sh@20 -- # IFS=: 00:05:16.484 06:00:22 -- accel/accel.sh@20 -- # read -r var val 00:05:16.484 06:00:22 -- accel/accel.sh@21 -- # val=0x1 00:05:16.484 06:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.484 06:00:22 -- accel/accel.sh@20 -- # IFS=: 00:05:16.484 06:00:22 -- accel/accel.sh@20 -- # read -r var val 00:05:16.484 06:00:22 -- accel/accel.sh@21 -- # val= 00:05:16.484 06:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.484 06:00:22 -- accel/accel.sh@20 -- # IFS=: 00:05:16.484 06:00:22 -- accel/accel.sh@20 -- # read -r var val 00:05:16.484 06:00:22 -- accel/accel.sh@21 -- # val= 00:05:16.484 06:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.484 06:00:22 -- accel/accel.sh@20 -- # IFS=: 00:05:16.484 06:00:22 -- accel/accel.sh@20 -- # read -r var val 00:05:16.484 06:00:22 -- accel/accel.sh@21 -- # val=dualcast 00:05:16.484 06:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.484 06:00:22 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:05:16.484 06:00:22 -- accel/accel.sh@20 -- # IFS=: 00:05:16.484 06:00:22 -- accel/accel.sh@20 -- # read -r var val 00:05:16.484 06:00:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:16.484 06:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.484 06:00:22 -- accel/accel.sh@20 -- # IFS=: 00:05:16.484 06:00:22 -- accel/accel.sh@20 -- # read -r var val 00:05:16.484 06:00:22 -- accel/accel.sh@21 -- # val= 00:05:16.484 06:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.484 06:00:22 -- accel/accel.sh@20 -- # IFS=: 00:05:16.484 06:00:22 -- accel/accel.sh@20 -- # read -r var val 00:05:16.484 06:00:22 -- accel/accel.sh@21 -- # val=software 00:05:16.484 06:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.484 06:00:22 -- accel/accel.sh@23 -- # accel_module=software 00:05:16.484 06:00:22 -- accel/accel.sh@20 -- # IFS=: 00:05:16.484 06:00:22 -- accel/accel.sh@20 -- # read -r var val 00:05:16.484 06:00:22 -- accel/accel.sh@21 -- # val=32 00:05:16.484 06:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.484 06:00:22 -- accel/accel.sh@20 -- # IFS=: 00:05:16.484 06:00:22 -- accel/accel.sh@20 -- # read -r var val 00:05:16.484 06:00:22 -- accel/accel.sh@21 -- # val=32 00:05:16.484 06:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.484 06:00:22 -- accel/accel.sh@20 -- # IFS=: 00:05:16.484 06:00:22 -- accel/accel.sh@20 -- # read -r var val 00:05:16.484 06:00:22 -- accel/accel.sh@21 -- # val=1 00:05:16.484 06:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.484 06:00:22 -- accel/accel.sh@20 -- # IFS=: 00:05:16.484 06:00:22 -- accel/accel.sh@20 -- # read -r var val 00:05:16.484 06:00:22 
-- accel/accel.sh@21 -- # val='1 seconds' 00:05:16.484 06:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.484 06:00:22 -- accel/accel.sh@20 -- # IFS=: 00:05:16.484 06:00:22 -- accel/accel.sh@20 -- # read -r var val 00:05:16.484 06:00:22 -- accel/accel.sh@21 -- # val=Yes 00:05:16.484 06:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.484 06:00:22 -- accel/accel.sh@20 -- # IFS=: 00:05:16.484 06:00:22 -- accel/accel.sh@20 -- # read -r var val 00:05:16.484 06:00:22 -- accel/accel.sh@21 -- # val= 00:05:16.484 06:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.484 06:00:22 -- accel/accel.sh@20 -- # IFS=: 00:05:16.484 06:00:22 -- accel/accel.sh@20 -- # read -r var val 00:05:16.484 06:00:22 -- accel/accel.sh@21 -- # val= 00:05:16.484 06:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.484 06:00:22 -- accel/accel.sh@20 -- # IFS=: 00:05:16.484 06:00:22 -- accel/accel.sh@20 -- # read -r var val 00:05:17.857 06:00:24 -- accel/accel.sh@21 -- # val= 00:05:17.857 06:00:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.857 06:00:24 -- accel/accel.sh@20 -- # IFS=: 00:05:17.857 06:00:24 -- accel/accel.sh@20 -- # read -r var val 00:05:17.857 06:00:24 -- accel/accel.sh@21 -- # val= 00:05:17.857 06:00:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.857 06:00:24 -- accel/accel.sh@20 -- # IFS=: 00:05:17.857 06:00:24 -- accel/accel.sh@20 -- # read -r var val 00:05:17.857 06:00:24 -- accel/accel.sh@21 -- # val= 00:05:17.857 06:00:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.857 06:00:24 -- accel/accel.sh@20 -- # IFS=: 00:05:17.857 06:00:24 -- accel/accel.sh@20 -- # read -r var val 00:05:17.857 06:00:24 -- accel/accel.sh@21 -- # val= 00:05:17.857 06:00:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.857 06:00:24 -- accel/accel.sh@20 -- # IFS=: 00:05:17.857 06:00:24 -- accel/accel.sh@20 -- # read -r var val 00:05:17.857 06:00:24 -- accel/accel.sh@21 -- # val= 00:05:17.857 06:00:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.857 06:00:24 -- accel/accel.sh@20 -- # IFS=: 00:05:17.857 06:00:24 -- accel/accel.sh@20 -- # read -r var val 00:05:17.857 06:00:24 -- accel/accel.sh@21 -- # val= 00:05:17.857 06:00:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:17.857 06:00:24 -- accel/accel.sh@20 -- # IFS=: 00:05:17.857 06:00:24 -- accel/accel.sh@20 -- # read -r var val 00:05:17.857 06:00:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:17.857 06:00:24 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:05:17.857 06:00:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:17.857 00:05:17.857 real 0m2.948s 00:05:17.857 user 0m2.652s 00:05:17.857 sys 0m0.287s 00:05:17.857 06:00:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.857 06:00:24 -- common/autotest_common.sh@10 -- # set +x 00:05:17.857 ************************************ 00:05:17.857 END TEST accel_dualcast 00:05:17.857 ************************************ 00:05:17.857 06:00:24 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:17.857 06:00:24 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:17.857 06:00:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:17.857 06:00:24 -- common/autotest_common.sh@10 -- # set +x 00:05:17.857 ************************************ 00:05:17.857 START TEST accel_compare 00:05:17.857 ************************************ 00:05:17.857 06:00:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:05:17.857 06:00:24 -- accel/accel.sh@16 -- # local accel_opc 00:05:17.857 06:00:24 
-- accel/accel.sh@17 -- # local accel_module 00:05:17.857 06:00:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:05:17.857 06:00:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:17.857 06:00:24 -- accel/accel.sh@12 -- # build_accel_config 00:05:17.857 06:00:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:17.857 06:00:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:17.857 06:00:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:17.857 06:00:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:17.857 06:00:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:17.857 06:00:24 -- accel/accel.sh@41 -- # local IFS=, 00:05:17.857 06:00:24 -- accel/accel.sh@42 -- # jq -r . 00:05:17.857 [2024-07-13 06:00:24.119615] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:17.857 [2024-07-13 06:00:24.119705] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1006105 ] 00:05:17.857 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.857 [2024-07-13 06:00:24.185107] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.857 [2024-07-13 06:00:24.302724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.230 06:00:25 -- accel/accel.sh@18 -- # out=' 00:05:19.230 SPDK Configuration: 00:05:19.230 Core mask: 0x1 00:05:19.230 00:05:19.230 Accel Perf Configuration: 00:05:19.230 Workload Type: compare 00:05:19.230 Transfer size: 4096 bytes 00:05:19.230 Vector count 1 00:05:19.230 Module: software 00:05:19.230 Queue depth: 32 00:05:19.230 Allocate depth: 32 00:05:19.230 # threads/core: 1 00:05:19.230 Run time: 1 seconds 00:05:19.230 Verify: Yes 00:05:19.230 00:05:19.230 Running for 1 seconds... 00:05:19.230 00:05:19.230 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:19.230 ------------------------------------------------------------------------------------ 00:05:19.230 0,0 396896/s 1550 MiB/s 0 0 00:05:19.230 ==================================================================================== 00:05:19.230 Total 396896/s 1550 MiB/s 0 0' 00:05:19.230 06:00:25 -- accel/accel.sh@20 -- # IFS=: 00:05:19.230 06:00:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:19.230 06:00:25 -- accel/accel.sh@20 -- # read -r var val 00:05:19.230 06:00:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:19.230 06:00:25 -- accel/accel.sh@12 -- # build_accel_config 00:05:19.230 06:00:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:19.230 06:00:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:19.230 06:00:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:19.230 06:00:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:19.230 06:00:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:19.230 06:00:25 -- accel/accel.sh@41 -- # local IFS=, 00:05:19.230 06:00:25 -- accel/accel.sh@42 -- # jq -r . 00:05:19.230 [2024-07-13 06:00:25.595814] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
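In every summary table here, including the compare run just above, the last two columns (Failed and Miscompares) are 0, which together with the -y verify flag is what makes these passes clean. If you wanted to gate on those columns yourself from a saved accel_perf log, a small awk filter is enough (illustrative only; accel_perf.log is a hypothetical capture, and the real harness does not do this):

    # succeed only if the Total row shows zero failed ops and zero miscompares
    awk '$1 == "Total" { exit ($(NF-1) != 0 || $NF != 0) }' accel_perf.log && echo PASS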
00:05:19.230 [2024-07-13 06:00:25.595912] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1006369 ] 00:05:19.230 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.230 [2024-07-13 06:00:25.656938] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.489 [2024-07-13 06:00:25.783086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.489 06:00:25 -- accel/accel.sh@21 -- # val= 00:05:19.489 06:00:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.489 06:00:25 -- accel/accel.sh@20 -- # IFS=: 00:05:19.489 06:00:25 -- accel/accel.sh@20 -- # read -r var val 00:05:19.489 06:00:25 -- accel/accel.sh@21 -- # val= 00:05:19.489 06:00:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.489 06:00:25 -- accel/accel.sh@20 -- # IFS=: 00:05:19.489 06:00:25 -- accel/accel.sh@20 -- # read -r var val 00:05:19.489 06:00:25 -- accel/accel.sh@21 -- # val=0x1 00:05:19.489 06:00:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.489 06:00:25 -- accel/accel.sh@20 -- # IFS=: 00:05:19.489 06:00:25 -- accel/accel.sh@20 -- # read -r var val 00:05:19.489 06:00:25 -- accel/accel.sh@21 -- # val= 00:05:19.489 06:00:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.489 06:00:25 -- accel/accel.sh@20 -- # IFS=: 00:05:19.489 06:00:25 -- accel/accel.sh@20 -- # read -r var val 00:05:19.489 06:00:25 -- accel/accel.sh@21 -- # val= 00:05:19.489 06:00:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.489 06:00:25 -- accel/accel.sh@20 -- # IFS=: 00:05:19.489 06:00:25 -- accel/accel.sh@20 -- # read -r var val 00:05:19.489 06:00:25 -- accel/accel.sh@21 -- # val=compare 00:05:19.489 06:00:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.489 06:00:25 -- accel/accel.sh@24 -- # accel_opc=compare 00:05:19.489 06:00:25 -- accel/accel.sh@20 -- # IFS=: 00:05:19.489 06:00:25 -- accel/accel.sh@20 -- # read -r var val 00:05:19.489 06:00:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:19.489 06:00:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.489 06:00:25 -- accel/accel.sh@20 -- # IFS=: 00:05:19.489 06:00:25 -- accel/accel.sh@20 -- # read -r var val 00:05:19.489 06:00:25 -- accel/accel.sh@21 -- # val= 00:05:19.489 06:00:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.489 06:00:25 -- accel/accel.sh@20 -- # IFS=: 00:05:19.489 06:00:25 -- accel/accel.sh@20 -- # read -r var val 00:05:19.489 06:00:25 -- accel/accel.sh@21 -- # val=software 00:05:19.489 06:00:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.489 06:00:25 -- accel/accel.sh@23 -- # accel_module=software 00:05:19.489 06:00:25 -- accel/accel.sh@20 -- # IFS=: 00:05:19.489 06:00:25 -- accel/accel.sh@20 -- # read -r var val 00:05:19.489 06:00:25 -- accel/accel.sh@21 -- # val=32 00:05:19.489 06:00:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.489 06:00:25 -- accel/accel.sh@20 -- # IFS=: 00:05:19.489 06:00:25 -- accel/accel.sh@20 -- # read -r var val 00:05:19.489 06:00:25 -- accel/accel.sh@21 -- # val=32 00:05:19.489 06:00:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.489 06:00:25 -- accel/accel.sh@20 -- # IFS=: 00:05:19.489 06:00:25 -- accel/accel.sh@20 -- # read -r var val 00:05:19.489 06:00:25 -- accel/accel.sh@21 -- # val=1 00:05:19.489 06:00:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.489 06:00:25 -- accel/accel.sh@20 -- # IFS=: 00:05:19.489 06:00:25 -- accel/accel.sh@20 -- # read -r var val 00:05:19.489 06:00:25 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:05:19.489 06:00:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.489 06:00:25 -- accel/accel.sh@20 -- # IFS=: 00:05:19.489 06:00:25 -- accel/accel.sh@20 -- # read -r var val 00:05:19.489 06:00:25 -- accel/accel.sh@21 -- # val=Yes 00:05:19.489 06:00:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.489 06:00:25 -- accel/accel.sh@20 -- # IFS=: 00:05:19.489 06:00:25 -- accel/accel.sh@20 -- # read -r var val 00:05:19.489 06:00:25 -- accel/accel.sh@21 -- # val= 00:05:19.489 06:00:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.489 06:00:25 -- accel/accel.sh@20 -- # IFS=: 00:05:19.489 06:00:25 -- accel/accel.sh@20 -- # read -r var val 00:05:19.489 06:00:25 -- accel/accel.sh@21 -- # val= 00:05:19.489 06:00:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.489 06:00:25 -- accel/accel.sh@20 -- # IFS=: 00:05:19.489 06:00:25 -- accel/accel.sh@20 -- # read -r var val 00:05:20.862 06:00:27 -- accel/accel.sh@21 -- # val= 00:05:20.862 06:00:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.862 06:00:27 -- accel/accel.sh@20 -- # IFS=: 00:05:20.862 06:00:27 -- accel/accel.sh@20 -- # read -r var val 00:05:20.862 06:00:27 -- accel/accel.sh@21 -- # val= 00:05:20.862 06:00:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.862 06:00:27 -- accel/accel.sh@20 -- # IFS=: 00:05:20.862 06:00:27 -- accel/accel.sh@20 -- # read -r var val 00:05:20.862 06:00:27 -- accel/accel.sh@21 -- # val= 00:05:20.862 06:00:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.862 06:00:27 -- accel/accel.sh@20 -- # IFS=: 00:05:20.862 06:00:27 -- accel/accel.sh@20 -- # read -r var val 00:05:20.862 06:00:27 -- accel/accel.sh@21 -- # val= 00:05:20.863 06:00:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.863 06:00:27 -- accel/accel.sh@20 -- # IFS=: 00:05:20.863 06:00:27 -- accel/accel.sh@20 -- # read -r var val 00:05:20.863 06:00:27 -- accel/accel.sh@21 -- # val= 00:05:20.863 06:00:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.863 06:00:27 -- accel/accel.sh@20 -- # IFS=: 00:05:20.863 06:00:27 -- accel/accel.sh@20 -- # read -r var val 00:05:20.863 06:00:27 -- accel/accel.sh@21 -- # val= 00:05:20.863 06:00:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:20.863 06:00:27 -- accel/accel.sh@20 -- # IFS=: 00:05:20.863 06:00:27 -- accel/accel.sh@20 -- # read -r var val 00:05:20.863 06:00:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:20.863 06:00:27 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:05:20.863 06:00:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:20.863 00:05:20.863 real 0m2.957s 00:05:20.863 user 0m2.663s 00:05:20.863 sys 0m0.284s 00:05:20.863 06:00:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.863 06:00:27 -- common/autotest_common.sh@10 -- # set +x 00:05:20.863 ************************************ 00:05:20.863 END TEST accel_compare 00:05:20.863 ************************************ 00:05:20.863 06:00:27 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:20.863 06:00:27 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:20.863 06:00:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:20.863 06:00:27 -- common/autotest_common.sh@10 -- # set +x 00:05:20.863 ************************************ 00:05:20.863 START TEST accel_xor 00:05:20.863 ************************************ 00:05:20.863 06:00:27 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:05:20.863 06:00:27 -- accel/accel.sh@16 -- # local accel_opc 00:05:20.863 06:00:27 -- accel/accel.sh@17 
-- # local accel_module 00:05:20.863 06:00:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:05:20.863 06:00:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:20.863 06:00:27 -- accel/accel.sh@12 -- # build_accel_config 00:05:20.863 06:00:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:20.863 06:00:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.863 06:00:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.863 06:00:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:20.863 06:00:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:20.863 06:00:27 -- accel/accel.sh@41 -- # local IFS=, 00:05:20.863 06:00:27 -- accel/accel.sh@42 -- # jq -r . 00:05:20.863 [2024-07-13 06:00:27.097491] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:20.863 [2024-07-13 06:00:27.097574] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1006529 ] 00:05:20.863 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.863 [2024-07-13 06:00:27.159694] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.863 [2024-07-13 06:00:27.279886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.230 06:00:28 -- accel/accel.sh@18 -- # out=' 00:05:22.230 SPDK Configuration: 00:05:22.230 Core mask: 0x1 00:05:22.230 00:05:22.230 Accel Perf Configuration: 00:05:22.230 Workload Type: xor 00:05:22.230 Source buffers: 2 00:05:22.230 Transfer size: 4096 bytes 00:05:22.230 Vector count 1 00:05:22.230 Module: software 00:05:22.230 Queue depth: 32 00:05:22.230 Allocate depth: 32 00:05:22.230 # threads/core: 1 00:05:22.230 Run time: 1 seconds 00:05:22.230 Verify: Yes 00:05:22.230 00:05:22.230 Running for 1 seconds... 00:05:22.230 00:05:22.230 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:22.230 ------------------------------------------------------------------------------------ 00:05:22.230 0,0 192160/s 750 MiB/s 0 0 00:05:22.230 ==================================================================================== 00:05:22.230 Total 192160/s 750 MiB/s 0 0' 00:05:22.230 06:00:28 -- accel/accel.sh@20 -- # IFS=: 00:05:22.230 06:00:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:22.230 06:00:28 -- accel/accel.sh@20 -- # read -r var val 00:05:22.230 06:00:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:22.230 06:00:28 -- accel/accel.sh@12 -- # build_accel_config 00:05:22.230 06:00:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:22.230 06:00:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.230 06:00:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.230 06:00:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:22.230 06:00:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:22.230 06:00:28 -- accel/accel.sh@41 -- # local IFS=, 00:05:22.230 06:00:28 -- accel/accel.sh@42 -- # jq -r . 00:05:22.230 [2024-07-13 06:00:28.568489] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:22.230 [2024-07-13 06:00:28.568572] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1006678 ] 00:05:22.230 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.230 [2024-07-13 06:00:28.629240] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.488 [2024-07-13 06:00:28.749806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.488 06:00:28 -- accel/accel.sh@21 -- # val= 00:05:22.488 06:00:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.488 06:00:28 -- accel/accel.sh@20 -- # IFS=: 00:05:22.488 06:00:28 -- accel/accel.sh@20 -- # read -r var val 00:05:22.488 06:00:28 -- accel/accel.sh@21 -- # val= 00:05:22.488 06:00:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.488 06:00:28 -- accel/accel.sh@20 -- # IFS=: 00:05:22.488 06:00:28 -- accel/accel.sh@20 -- # read -r var val 00:05:22.488 06:00:28 -- accel/accel.sh@21 -- # val=0x1 00:05:22.488 06:00:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.488 06:00:28 -- accel/accel.sh@20 -- # IFS=: 00:05:22.488 06:00:28 -- accel/accel.sh@20 -- # read -r var val 00:05:22.488 06:00:28 -- accel/accel.sh@21 -- # val= 00:05:22.488 06:00:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.488 06:00:28 -- accel/accel.sh@20 -- # IFS=: 00:05:22.488 06:00:28 -- accel/accel.sh@20 -- # read -r var val 00:05:22.488 06:00:28 -- accel/accel.sh@21 -- # val= 00:05:22.488 06:00:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.488 06:00:28 -- accel/accel.sh@20 -- # IFS=: 00:05:22.488 06:00:28 -- accel/accel.sh@20 -- # read -r var val 00:05:22.488 06:00:28 -- accel/accel.sh@21 -- # val=xor 00:05:22.488 06:00:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.488 06:00:28 -- accel/accel.sh@24 -- # accel_opc=xor 00:05:22.488 06:00:28 -- accel/accel.sh@20 -- # IFS=: 00:05:22.488 06:00:28 -- accel/accel.sh@20 -- # read -r var val 00:05:22.488 06:00:28 -- accel/accel.sh@21 -- # val=2 00:05:22.488 06:00:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.488 06:00:28 -- accel/accel.sh@20 -- # IFS=: 00:05:22.488 06:00:28 -- accel/accel.sh@20 -- # read -r var val 00:05:22.488 06:00:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:22.488 06:00:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.488 06:00:28 -- accel/accel.sh@20 -- # IFS=: 00:05:22.488 06:00:28 -- accel/accel.sh@20 -- # read -r var val 00:05:22.488 06:00:28 -- accel/accel.sh@21 -- # val= 00:05:22.488 06:00:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.488 06:00:28 -- accel/accel.sh@20 -- # IFS=: 00:05:22.488 06:00:28 -- accel/accel.sh@20 -- # read -r var val 00:05:22.488 06:00:28 -- accel/accel.sh@21 -- # val=software 00:05:22.488 06:00:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.488 06:00:28 -- accel/accel.sh@23 -- # accel_module=software 00:05:22.488 06:00:28 -- accel/accel.sh@20 -- # IFS=: 00:05:22.488 06:00:28 -- accel/accel.sh@20 -- # read -r var val 00:05:22.488 06:00:28 -- accel/accel.sh@21 -- # val=32 00:05:22.488 06:00:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.488 06:00:28 -- accel/accel.sh@20 -- # IFS=: 00:05:22.488 06:00:28 -- accel/accel.sh@20 -- # read -r var val 00:05:22.488 06:00:28 -- accel/accel.sh@21 -- # val=32 00:05:22.488 06:00:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.488 06:00:28 -- accel/accel.sh@20 -- # IFS=: 00:05:22.488 06:00:28 -- accel/accel.sh@20 -- # read -r var val 00:05:22.488 06:00:28 -- 
accel/accel.sh@21 -- # val=1 00:05:22.488 06:00:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.488 06:00:28 -- accel/accel.sh@20 -- # IFS=: 00:05:22.488 06:00:28 -- accel/accel.sh@20 -- # read -r var val 00:05:22.488 06:00:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:22.488 06:00:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.488 06:00:28 -- accel/accel.sh@20 -- # IFS=: 00:05:22.488 06:00:28 -- accel/accel.sh@20 -- # read -r var val 00:05:22.488 06:00:28 -- accel/accel.sh@21 -- # val=Yes 00:05:22.488 06:00:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.488 06:00:28 -- accel/accel.sh@20 -- # IFS=: 00:05:22.488 06:00:28 -- accel/accel.sh@20 -- # read -r var val 00:05:22.488 06:00:28 -- accel/accel.sh@21 -- # val= 00:05:22.488 06:00:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.488 06:00:28 -- accel/accel.sh@20 -- # IFS=: 00:05:22.488 06:00:28 -- accel/accel.sh@20 -- # read -r var val 00:05:22.488 06:00:28 -- accel/accel.sh@21 -- # val= 00:05:22.488 06:00:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.488 06:00:28 -- accel/accel.sh@20 -- # IFS=: 00:05:22.488 06:00:28 -- accel/accel.sh@20 -- # read -r var val 00:05:23.856 06:00:30 -- accel/accel.sh@21 -- # val= 00:05:23.857 06:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.857 06:00:30 -- accel/accel.sh@20 -- # IFS=: 00:05:23.857 06:00:30 -- accel/accel.sh@20 -- # read -r var val 00:05:23.857 06:00:30 -- accel/accel.sh@21 -- # val= 00:05:23.857 06:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.857 06:00:30 -- accel/accel.sh@20 -- # IFS=: 00:05:23.857 06:00:30 -- accel/accel.sh@20 -- # read -r var val 00:05:23.857 06:00:30 -- accel/accel.sh@21 -- # val= 00:05:23.857 06:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.857 06:00:30 -- accel/accel.sh@20 -- # IFS=: 00:05:23.857 06:00:30 -- accel/accel.sh@20 -- # read -r var val 00:05:23.857 06:00:30 -- accel/accel.sh@21 -- # val= 00:05:23.857 06:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.857 06:00:30 -- accel/accel.sh@20 -- # IFS=: 00:05:23.857 06:00:30 -- accel/accel.sh@20 -- # read -r var val 00:05:23.857 06:00:30 -- accel/accel.sh@21 -- # val= 00:05:23.857 06:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.857 06:00:30 -- accel/accel.sh@20 -- # IFS=: 00:05:23.857 06:00:30 -- accel/accel.sh@20 -- # read -r var val 00:05:23.857 06:00:30 -- accel/accel.sh@21 -- # val= 00:05:23.857 06:00:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:23.857 06:00:30 -- accel/accel.sh@20 -- # IFS=: 00:05:23.857 06:00:30 -- accel/accel.sh@20 -- # read -r var val 00:05:23.857 06:00:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:23.857 06:00:30 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:05:23.857 06:00:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:23.857 00:05:23.857 real 0m2.952s 00:05:23.857 user 0m2.653s 00:05:23.857 sys 0m0.289s 00:05:23.857 06:00:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.857 06:00:30 -- common/autotest_common.sh@10 -- # set +x 00:05:23.857 ************************************ 00:05:23.857 END TEST accel_xor 00:05:23.857 ************************************ 00:05:23.857 06:00:30 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:23.857 06:00:30 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:23.857 06:00:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:23.857 06:00:30 -- common/autotest_common.sh@10 -- # set +x 00:05:23.857 ************************************ 00:05:23.857 START TEST accel_xor 
00:05:23.857 ************************************ 00:05:23.857 06:00:30 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:05:23.857 06:00:30 -- accel/accel.sh@16 -- # local accel_opc 00:05:23.857 06:00:30 -- accel/accel.sh@17 -- # local accel_module 00:05:23.857 06:00:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:05:23.857 06:00:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:23.857 06:00:30 -- accel/accel.sh@12 -- # build_accel_config 00:05:23.857 06:00:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:23.857 06:00:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.857 06:00:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.857 06:00:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:23.857 06:00:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:23.857 06:00:30 -- accel/accel.sh@41 -- # local IFS=, 00:05:23.857 06:00:30 -- accel/accel.sh@42 -- # jq -r . 00:05:23.857 [2024-07-13 06:00:30.073776] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:23.857 [2024-07-13 06:00:30.073881] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1006942 ] 00:05:23.857 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.857 [2024-07-13 06:00:30.138472] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.857 [2024-07-13 06:00:30.259405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.224 06:00:31 -- accel/accel.sh@18 -- # out=' 00:05:25.224 SPDK Configuration: 00:05:25.224 Core mask: 0x1 00:05:25.224 00:05:25.224 Accel Perf Configuration: 00:05:25.224 Workload Type: xor 00:05:25.224 Source buffers: 3 00:05:25.224 Transfer size: 4096 bytes 00:05:25.224 Vector count 1 00:05:25.224 Module: software 00:05:25.224 Queue depth: 32 00:05:25.224 Allocate depth: 32 00:05:25.224 # threads/core: 1 00:05:25.224 Run time: 1 seconds 00:05:25.224 Verify: Yes 00:05:25.224 00:05:25.224 Running for 1 seconds... 00:05:25.224 00:05:25.224 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:25.224 ------------------------------------------------------------------------------------ 00:05:25.224 0,0 182976/s 714 MiB/s 0 0 00:05:25.224 ==================================================================================== 00:05:25.224 Total 182976/s 714 MiB/s 0 0' 00:05:25.224 06:00:31 -- accel/accel.sh@20 -- # IFS=: 00:05:25.224 06:00:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:25.224 06:00:31 -- accel/accel.sh@20 -- # read -r var val 00:05:25.224 06:00:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:25.224 06:00:31 -- accel/accel.sh@12 -- # build_accel_config 00:05:25.224 06:00:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:25.224 06:00:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.224 06:00:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.224 06:00:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:25.224 06:00:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:25.224 06:00:31 -- accel/accel.sh@41 -- # local IFS=, 00:05:25.224 06:00:31 -- accel/accel.sh@42 -- # jq -r . 00:05:25.224 [2024-07-13 06:00:31.552574] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:25.224 [2024-07-13 06:00:31.552656] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1007094 ] 00:05:25.224 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.224 [2024-07-13 06:00:31.614681] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.480 [2024-07-13 06:00:31.735425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.480 06:00:31 -- accel/accel.sh@21 -- # val= 00:05:25.480 06:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.480 06:00:31 -- accel/accel.sh@20 -- # IFS=: 00:05:25.480 06:00:31 -- accel/accel.sh@20 -- # read -r var val 00:05:25.480 06:00:31 -- accel/accel.sh@21 -- # val= 00:05:25.480 06:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.480 06:00:31 -- accel/accel.sh@20 -- # IFS=: 00:05:25.480 06:00:31 -- accel/accel.sh@20 -- # read -r var val 00:05:25.480 06:00:31 -- accel/accel.sh@21 -- # val=0x1 00:05:25.480 06:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.480 06:00:31 -- accel/accel.sh@20 -- # IFS=: 00:05:25.480 06:00:31 -- accel/accel.sh@20 -- # read -r var val 00:05:25.480 06:00:31 -- accel/accel.sh@21 -- # val= 00:05:25.480 06:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.480 06:00:31 -- accel/accel.sh@20 -- # IFS=: 00:05:25.480 06:00:31 -- accel/accel.sh@20 -- # read -r var val 00:05:25.480 06:00:31 -- accel/accel.sh@21 -- # val= 00:05:25.480 06:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.481 06:00:31 -- accel/accel.sh@20 -- # IFS=: 00:05:25.481 06:00:31 -- accel/accel.sh@20 -- # read -r var val 00:05:25.481 06:00:31 -- accel/accel.sh@21 -- # val=xor 00:05:25.481 06:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.481 06:00:31 -- accel/accel.sh@24 -- # accel_opc=xor 00:05:25.481 06:00:31 -- accel/accel.sh@20 -- # IFS=: 00:05:25.481 06:00:31 -- accel/accel.sh@20 -- # read -r var val 00:05:25.481 06:00:31 -- accel/accel.sh@21 -- # val=3 00:05:25.481 06:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.481 06:00:31 -- accel/accel.sh@20 -- # IFS=: 00:05:25.481 06:00:31 -- accel/accel.sh@20 -- # read -r var val 00:05:25.481 06:00:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:25.481 06:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.481 06:00:31 -- accel/accel.sh@20 -- # IFS=: 00:05:25.481 06:00:31 -- accel/accel.sh@20 -- # read -r var val 00:05:25.481 06:00:31 -- accel/accel.sh@21 -- # val= 00:05:25.481 06:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.481 06:00:31 -- accel/accel.sh@20 -- # IFS=: 00:05:25.481 06:00:31 -- accel/accel.sh@20 -- # read -r var val 00:05:25.481 06:00:31 -- accel/accel.sh@21 -- # val=software 00:05:25.481 06:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.481 06:00:31 -- accel/accel.sh@23 -- # accel_module=software 00:05:25.481 06:00:31 -- accel/accel.sh@20 -- # IFS=: 00:05:25.481 06:00:31 -- accel/accel.sh@20 -- # read -r var val 00:05:25.481 06:00:31 -- accel/accel.sh@21 -- # val=32 00:05:25.481 06:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.481 06:00:31 -- accel/accel.sh@20 -- # IFS=: 00:05:25.481 06:00:31 -- accel/accel.sh@20 -- # read -r var val 00:05:25.481 06:00:31 -- accel/accel.sh@21 -- # val=32 00:05:25.481 06:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.481 06:00:31 -- accel/accel.sh@20 -- # IFS=: 00:05:25.481 06:00:31 -- accel/accel.sh@20 -- # read -r var val 00:05:25.481 06:00:31 -- 
accel/accel.sh@21 -- # val=1 00:05:25.481 06:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.481 06:00:31 -- accel/accel.sh@20 -- # IFS=: 00:05:25.481 06:00:31 -- accel/accel.sh@20 -- # read -r var val 00:05:25.481 06:00:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:25.481 06:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.481 06:00:31 -- accel/accel.sh@20 -- # IFS=: 00:05:25.481 06:00:31 -- accel/accel.sh@20 -- # read -r var val 00:05:25.481 06:00:31 -- accel/accel.sh@21 -- # val=Yes 00:05:25.481 06:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.481 06:00:31 -- accel/accel.sh@20 -- # IFS=: 00:05:25.481 06:00:31 -- accel/accel.sh@20 -- # read -r var val 00:05:25.481 06:00:31 -- accel/accel.sh@21 -- # val= 00:05:25.481 06:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.481 06:00:31 -- accel/accel.sh@20 -- # IFS=: 00:05:25.481 06:00:31 -- accel/accel.sh@20 -- # read -r var val 00:05:25.481 06:00:31 -- accel/accel.sh@21 -- # val= 00:05:25.481 06:00:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.481 06:00:31 -- accel/accel.sh@20 -- # IFS=: 00:05:25.481 06:00:31 -- accel/accel.sh@20 -- # read -r var val 00:05:26.854 06:00:33 -- accel/accel.sh@21 -- # val= 00:05:26.854 06:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.854 06:00:33 -- accel/accel.sh@20 -- # IFS=: 00:05:26.854 06:00:33 -- accel/accel.sh@20 -- # read -r var val 00:05:26.854 06:00:33 -- accel/accel.sh@21 -- # val= 00:05:26.854 06:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.854 06:00:33 -- accel/accel.sh@20 -- # IFS=: 00:05:26.854 06:00:33 -- accel/accel.sh@20 -- # read -r var val 00:05:26.854 06:00:33 -- accel/accel.sh@21 -- # val= 00:05:26.854 06:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.854 06:00:33 -- accel/accel.sh@20 -- # IFS=: 00:05:26.854 06:00:33 -- accel/accel.sh@20 -- # read -r var val 00:05:26.854 06:00:33 -- accel/accel.sh@21 -- # val= 00:05:26.854 06:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.854 06:00:33 -- accel/accel.sh@20 -- # IFS=: 00:05:26.854 06:00:33 -- accel/accel.sh@20 -- # read -r var val 00:05:26.854 06:00:33 -- accel/accel.sh@21 -- # val= 00:05:26.854 06:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.854 06:00:33 -- accel/accel.sh@20 -- # IFS=: 00:05:26.854 06:00:33 -- accel/accel.sh@20 -- # read -r var val 00:05:26.854 06:00:33 -- accel/accel.sh@21 -- # val= 00:05:26.854 06:00:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:26.854 06:00:33 -- accel/accel.sh@20 -- # IFS=: 00:05:26.854 06:00:33 -- accel/accel.sh@20 -- # read -r var val 00:05:26.854 06:00:33 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:26.855 06:00:33 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:05:26.855 06:00:33 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:26.855 00:05:26.855 real 0m2.962s 00:05:26.855 user 0m2.677s 00:05:26.855 sys 0m0.276s 00:05:26.855 06:00:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.855 06:00:33 -- common/autotest_common.sh@10 -- # set +x 00:05:26.855 ************************************ 00:05:26.855 END TEST accel_xor 00:05:26.855 ************************************ 00:05:26.855 06:00:33 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:26.855 06:00:33 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:26.855 06:00:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:26.855 06:00:33 -- common/autotest_common.sh@10 -- # set +x 00:05:26.855 ************************************ 00:05:26.855 START TEST 
accel_dif_verify 00:05:26.855 ************************************ 00:05:26.855 06:00:33 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:05:26.855 06:00:33 -- accel/accel.sh@16 -- # local accel_opc 00:05:26.855 06:00:33 -- accel/accel.sh@17 -- # local accel_module 00:05:26.855 06:00:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:05:26.856 06:00:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:26.856 06:00:33 -- accel/accel.sh@12 -- # build_accel_config 00:05:26.856 06:00:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:26.856 06:00:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:26.856 06:00:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:26.856 06:00:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:26.856 06:00:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:26.856 06:00:33 -- accel/accel.sh@41 -- # local IFS=, 00:05:26.856 06:00:33 -- accel/accel.sh@42 -- # jq -r . 00:05:26.856 [2024-07-13 06:00:33.059921] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:26.856 [2024-07-13 06:00:33.059998] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1007260 ] 00:05:26.856 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.856 [2024-07-13 06:00:33.121845] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.856 [2024-07-13 06:00:33.242633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.230 06:00:34 -- accel/accel.sh@18 -- # out=' 00:05:28.230 SPDK Configuration: 00:05:28.230 Core mask: 0x1 00:05:28.230 00:05:28.230 Accel Perf Configuration: 00:05:28.230 Workload Type: dif_verify 00:05:28.230 Vector size: 4096 bytes 00:05:28.230 Transfer size: 4096 bytes 00:05:28.230 Block size: 512 bytes 00:05:28.230 Metadata size: 8 bytes 00:05:28.230 Vector count 1 00:05:28.230 Module: software 00:05:28.230 Queue depth: 32 00:05:28.230 Allocate depth: 32 00:05:28.230 # threads/core: 1 00:05:28.230 Run time: 1 seconds 00:05:28.230 Verify: No 00:05:28.230 00:05:28.230 Running for 1 seconds... 00:05:28.230 00:05:28.230 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:28.230 ------------------------------------------------------------------------------------ 00:05:28.230 0,0 81888/s 324 MiB/s 0 0 00:05:28.230 ==================================================================================== 00:05:28.230 Total 81888/s 319 MiB/s 0 0' 00:05:28.230 06:00:34 -- accel/accel.sh@20 -- # IFS=: 00:05:28.230 06:00:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:28.230 06:00:34 -- accel/accel.sh@20 -- # read -r var val 00:05:28.230 06:00:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:28.230 06:00:34 -- accel/accel.sh@12 -- # build_accel_config 00:05:28.230 06:00:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:28.230 06:00:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:28.230 06:00:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:28.230 06:00:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:28.230 06:00:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:28.230 06:00:34 -- accel/accel.sh@41 -- # local IFS=, 00:05:28.230 06:00:34 -- accel/accel.sh@42 -- # jq -r . 
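The dif_verify block above exercises the same accel_perf example binary as the xor runs, only with -w dif_verify; the configuration dump's "Block size: 512 bytes" and "Metadata size: 8 bytes" lines indicate 8 bytes of protection information carried per 512-byte block inside each 4096-byte transfer. A minimal sketch of launching that workload by hand is below; the binary path and the -t/-w flags are taken from the command line in the log, while -q (queue depth) and -o (transfer size) are assumed option names that should be confirmed against accel_perf --help.

    # Hedged sketch: rerun the dif_verify workload outside the harness
    # (-q / -o flag names are assumptions; hugepages must already be configured)
    PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
    sudo "$PERF" -t 1 -w dif_verify -q 32 -o 4096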
00:05:28.230 [2024-07-13 06:00:34.542290] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:28.230 [2024-07-13 06:00:34.542372] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1007418 ] 00:05:28.230 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.230 [2024-07-13 06:00:34.608410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.230 [2024-07-13 06:00:34.728681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.496 06:00:34 -- accel/accel.sh@21 -- # val= 00:05:28.496 06:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.496 06:00:34 -- accel/accel.sh@20 -- # IFS=: 00:05:28.496 06:00:34 -- accel/accel.sh@20 -- # read -r var val 00:05:28.496 06:00:34 -- accel/accel.sh@21 -- # val= 00:05:28.496 06:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.496 06:00:34 -- accel/accel.sh@20 -- # IFS=: 00:05:28.496 06:00:34 -- accel/accel.sh@20 -- # read -r var val 00:05:28.496 06:00:34 -- accel/accel.sh@21 -- # val=0x1 00:05:28.496 06:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.496 06:00:34 -- accel/accel.sh@20 -- # IFS=: 00:05:28.496 06:00:34 -- accel/accel.sh@20 -- # read -r var val 00:05:28.496 06:00:34 -- accel/accel.sh@21 -- # val= 00:05:28.496 06:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.496 06:00:34 -- accel/accel.sh@20 -- # IFS=: 00:05:28.496 06:00:34 -- accel/accel.sh@20 -- # read -r var val 00:05:28.496 06:00:34 -- accel/accel.sh@21 -- # val= 00:05:28.496 06:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.496 06:00:34 -- accel/accel.sh@20 -- # IFS=: 00:05:28.496 06:00:34 -- accel/accel.sh@20 -- # read -r var val 00:05:28.496 06:00:34 -- accel/accel.sh@21 -- # val=dif_verify 00:05:28.496 06:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.496 06:00:34 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:05:28.496 06:00:34 -- accel/accel.sh@20 -- # IFS=: 00:05:28.496 06:00:34 -- accel/accel.sh@20 -- # read -r var val 00:05:28.496 06:00:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:28.496 06:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.496 06:00:34 -- accel/accel.sh@20 -- # IFS=: 00:05:28.496 06:00:34 -- accel/accel.sh@20 -- # read -r var val 00:05:28.496 06:00:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:28.496 06:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.496 06:00:34 -- accel/accel.sh@20 -- # IFS=: 00:05:28.496 06:00:34 -- accel/accel.sh@20 -- # read -r var val 00:05:28.496 06:00:34 -- accel/accel.sh@21 -- # val='512 bytes' 00:05:28.496 06:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.496 06:00:34 -- accel/accel.sh@20 -- # IFS=: 00:05:28.496 06:00:34 -- accel/accel.sh@20 -- # read -r var val 00:05:28.497 06:00:34 -- accel/accel.sh@21 -- # val='8 bytes' 00:05:28.497 06:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.497 06:00:34 -- accel/accel.sh@20 -- # IFS=: 00:05:28.497 06:00:34 -- accel/accel.sh@20 -- # read -r var val 00:05:28.497 06:00:34 -- accel/accel.sh@21 -- # val= 00:05:28.497 06:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.497 06:00:34 -- accel/accel.sh@20 -- # IFS=: 00:05:28.497 06:00:34 -- accel/accel.sh@20 -- # read -r var val 00:05:28.497 06:00:34 -- accel/accel.sh@21 -- # val=software 00:05:28.497 06:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.497 06:00:34 -- accel/accel.sh@23 -- # 
accel_module=software 00:05:28.497 06:00:34 -- accel/accel.sh@20 -- # IFS=: 00:05:28.497 06:00:34 -- accel/accel.sh@20 -- # read -r var val 00:05:28.497 06:00:34 -- accel/accel.sh@21 -- # val=32 00:05:28.497 06:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.497 06:00:34 -- accel/accel.sh@20 -- # IFS=: 00:05:28.497 06:00:34 -- accel/accel.sh@20 -- # read -r var val 00:05:28.497 06:00:34 -- accel/accel.sh@21 -- # val=32 00:05:28.497 06:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.497 06:00:34 -- accel/accel.sh@20 -- # IFS=: 00:05:28.497 06:00:34 -- accel/accel.sh@20 -- # read -r var val 00:05:28.497 06:00:34 -- accel/accel.sh@21 -- # val=1 00:05:28.497 06:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.497 06:00:34 -- accel/accel.sh@20 -- # IFS=: 00:05:28.497 06:00:34 -- accel/accel.sh@20 -- # read -r var val 00:05:28.497 06:00:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:28.497 06:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.497 06:00:34 -- accel/accel.sh@20 -- # IFS=: 00:05:28.497 06:00:34 -- accel/accel.sh@20 -- # read -r var val 00:05:28.497 06:00:34 -- accel/accel.sh@21 -- # val=No 00:05:28.497 06:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.497 06:00:34 -- accel/accel.sh@20 -- # IFS=: 00:05:28.497 06:00:34 -- accel/accel.sh@20 -- # read -r var val 00:05:28.497 06:00:34 -- accel/accel.sh@21 -- # val= 00:05:28.497 06:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.497 06:00:34 -- accel/accel.sh@20 -- # IFS=: 00:05:28.497 06:00:34 -- accel/accel.sh@20 -- # read -r var val 00:05:28.497 06:00:34 -- accel/accel.sh@21 -- # val= 00:05:28.497 06:00:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.497 06:00:34 -- accel/accel.sh@20 -- # IFS=: 00:05:28.497 06:00:34 -- accel/accel.sh@20 -- # read -r var val 00:05:29.876 06:00:36 -- accel/accel.sh@21 -- # val= 00:05:29.876 06:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.876 06:00:36 -- accel/accel.sh@20 -- # IFS=: 00:05:29.876 06:00:36 -- accel/accel.sh@20 -- # read -r var val 00:05:29.876 06:00:36 -- accel/accel.sh@21 -- # val= 00:05:29.876 06:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.876 06:00:36 -- accel/accel.sh@20 -- # IFS=: 00:05:29.876 06:00:36 -- accel/accel.sh@20 -- # read -r var val 00:05:29.876 06:00:36 -- accel/accel.sh@21 -- # val= 00:05:29.876 06:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.876 06:00:36 -- accel/accel.sh@20 -- # IFS=: 00:05:29.876 06:00:36 -- accel/accel.sh@20 -- # read -r var val 00:05:29.876 06:00:36 -- accel/accel.sh@21 -- # val= 00:05:29.876 06:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.876 06:00:36 -- accel/accel.sh@20 -- # IFS=: 00:05:29.876 06:00:36 -- accel/accel.sh@20 -- # read -r var val 00:05:29.876 06:00:36 -- accel/accel.sh@21 -- # val= 00:05:29.876 06:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.876 06:00:36 -- accel/accel.sh@20 -- # IFS=: 00:05:29.876 06:00:36 -- accel/accel.sh@20 -- # read -r var val 00:05:29.876 06:00:36 -- accel/accel.sh@21 -- # val= 00:05:29.876 06:00:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:29.876 06:00:36 -- accel/accel.sh@20 -- # IFS=: 00:05:29.876 06:00:36 -- accel/accel.sh@20 -- # read -r var val 00:05:29.876 06:00:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:29.876 06:00:36 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:05:29.876 06:00:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:29.876 00:05:29.876 real 0m2.973s 00:05:29.876 user 0m2.666s 00:05:29.876 sys 0m0.300s 00:05:29.876 06:00:36 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.876 06:00:36 -- common/autotest_common.sh@10 -- # set +x 00:05:29.876 ************************************ 00:05:29.876 END TEST accel_dif_verify 00:05:29.876 ************************************ 00:05:29.876 06:00:36 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:29.876 06:00:36 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:29.876 06:00:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:29.876 06:00:36 -- common/autotest_common.sh@10 -- # set +x 00:05:29.876 ************************************ 00:05:29.876 START TEST accel_dif_generate 00:05:29.876 ************************************ 00:05:29.876 06:00:36 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:05:29.876 06:00:36 -- accel/accel.sh@16 -- # local accel_opc 00:05:29.876 06:00:36 -- accel/accel.sh@17 -- # local accel_module 00:05:29.876 06:00:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:05:29.876 06:00:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:29.876 06:00:36 -- accel/accel.sh@12 -- # build_accel_config 00:05:29.876 06:00:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:29.876 06:00:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.876 06:00:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.876 06:00:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:29.876 06:00:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:29.876 06:00:36 -- accel/accel.sh@41 -- # local IFS=, 00:05:29.876 06:00:36 -- accel/accel.sh@42 -- # jq -r . 00:05:29.876 [2024-07-13 06:00:36.056335] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:29.876 [2024-07-13 06:00:36.056414] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1007679 ] 00:05:29.876 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.876 [2024-07-13 06:00:36.117788] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.876 [2024-07-13 06:00:36.235285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.249 06:00:37 -- accel/accel.sh@18 -- # out=' 00:05:31.249 SPDK Configuration: 00:05:31.249 Core mask: 0x1 00:05:31.249 00:05:31.249 Accel Perf Configuration: 00:05:31.249 Workload Type: dif_generate 00:05:31.249 Vector size: 4096 bytes 00:05:31.249 Transfer size: 4096 bytes 00:05:31.249 Block size: 512 bytes 00:05:31.249 Metadata size: 8 bytes 00:05:31.249 Vector count 1 00:05:31.249 Module: software 00:05:31.249 Queue depth: 32 00:05:31.249 Allocate depth: 32 00:05:31.249 # threads/core: 1 00:05:31.249 Run time: 1 seconds 00:05:31.249 Verify: No 00:05:31.249 00:05:31.249 Running for 1 seconds... 
00:05:31.249 00:05:31.249 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:31.249 ------------------------------------------------------------------------------------ 00:05:31.249 0,0 96288/s 382 MiB/s 0 0 00:05:31.249 ==================================================================================== 00:05:31.249 Total 96288/s 376 MiB/s 0 0' 00:05:31.249 06:00:37 -- accel/accel.sh@20 -- # IFS=: 00:05:31.249 06:00:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:31.249 06:00:37 -- accel/accel.sh@20 -- # read -r var val 00:05:31.249 06:00:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:31.249 06:00:37 -- accel/accel.sh@12 -- # build_accel_config 00:05:31.249 06:00:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:31.249 06:00:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.249 06:00:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.249 06:00:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:31.249 06:00:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:31.249 06:00:37 -- accel/accel.sh@41 -- # local IFS=, 00:05:31.249 06:00:37 -- accel/accel.sh@42 -- # jq -r . 00:05:31.249 [2024-07-13 06:00:37.527410] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:31.249 [2024-07-13 06:00:37.527494] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1007822 ] 00:05:31.249 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.249 [2024-07-13 06:00:37.588304] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.249 [2024-07-13 06:00:37.708590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.507 06:00:37 -- accel/accel.sh@21 -- # val= 00:05:31.507 06:00:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.507 06:00:37 -- accel/accel.sh@20 -- # IFS=: 00:05:31.507 06:00:37 -- accel/accel.sh@20 -- # read -r var val 00:05:31.507 06:00:37 -- accel/accel.sh@21 -- # val= 00:05:31.507 06:00:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.507 06:00:37 -- accel/accel.sh@20 -- # IFS=: 00:05:31.507 06:00:37 -- accel/accel.sh@20 -- # read -r var val 00:05:31.507 06:00:37 -- accel/accel.sh@21 -- # val=0x1 00:05:31.507 06:00:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.507 06:00:37 -- accel/accel.sh@20 -- # IFS=: 00:05:31.507 06:00:37 -- accel/accel.sh@20 -- # read -r var val 00:05:31.507 06:00:37 -- accel/accel.sh@21 -- # val= 00:05:31.507 06:00:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.507 06:00:37 -- accel/accel.sh@20 -- # IFS=: 00:05:31.507 06:00:37 -- accel/accel.sh@20 -- # read -r var val 00:05:31.507 06:00:37 -- accel/accel.sh@21 -- # val= 00:05:31.507 06:00:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.507 06:00:37 -- accel/accel.sh@20 -- # IFS=: 00:05:31.507 06:00:37 -- accel/accel.sh@20 -- # read -r var val 00:05:31.507 06:00:37 -- accel/accel.sh@21 -- # val=dif_generate 00:05:31.507 06:00:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.507 06:00:37 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:05:31.507 06:00:37 -- accel/accel.sh@20 -- # IFS=: 00:05:31.507 06:00:37 -- accel/accel.sh@20 -- # read -r var val 00:05:31.507 06:00:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:31.507 06:00:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.507 06:00:37 -- accel/accel.sh@20 -- # IFS=: 
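The dif_generate report above lists the same 96288 transfers/s twice with two bandwidth figures, 382 MiB/s on the per-core row and 376 MiB/s on the Total row. The smaller number matches the 4096-byte data payload alone, and the larger one is consistent with also counting the 8 bytes of DIF metadata per 512-byte block (4160 bytes per transfer); the dif_verify table earlier shows the same 324 vs 319 MiB/s split. A quick arithmetic check, purely illustrative:

    # Payload-only vs payload-plus-metadata accounting for 96288 transfers/s
    awk 'BEGIN {
        t = 96288                                      # transfers per second from the run above
        print t * 4096 / (1024*1024)                   # ~376 MiB/s, 4096-byte payload only
        print t * (4096 + 8 * 4096/512) / (1024*1024)  # ~382 MiB/s with 8 B metadata per 512 B block
    }'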
00:05:31.507 06:00:37 -- accel/accel.sh@20 -- # read -r var val 00:05:31.507 06:00:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:31.507 06:00:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.507 06:00:37 -- accel/accel.sh@20 -- # IFS=: 00:05:31.507 06:00:37 -- accel/accel.sh@20 -- # read -r var val 00:05:31.507 06:00:37 -- accel/accel.sh@21 -- # val='512 bytes' 00:05:31.507 06:00:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.507 06:00:37 -- accel/accel.sh@20 -- # IFS=: 00:05:31.507 06:00:37 -- accel/accel.sh@20 -- # read -r var val 00:05:31.507 06:00:37 -- accel/accel.sh@21 -- # val='8 bytes' 00:05:31.507 06:00:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.507 06:00:37 -- accel/accel.sh@20 -- # IFS=: 00:05:31.507 06:00:37 -- accel/accel.sh@20 -- # read -r var val 00:05:31.507 06:00:37 -- accel/accel.sh@21 -- # val= 00:05:31.507 06:00:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.507 06:00:37 -- accel/accel.sh@20 -- # IFS=: 00:05:31.507 06:00:37 -- accel/accel.sh@20 -- # read -r var val 00:05:31.507 06:00:37 -- accel/accel.sh@21 -- # val=software 00:05:31.507 06:00:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.507 06:00:37 -- accel/accel.sh@23 -- # accel_module=software 00:05:31.507 06:00:37 -- accel/accel.sh@20 -- # IFS=: 00:05:31.507 06:00:37 -- accel/accel.sh@20 -- # read -r var val 00:05:31.507 06:00:37 -- accel/accel.sh@21 -- # val=32 00:05:31.507 06:00:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.507 06:00:37 -- accel/accel.sh@20 -- # IFS=: 00:05:31.507 06:00:37 -- accel/accel.sh@20 -- # read -r var val 00:05:31.507 06:00:37 -- accel/accel.sh@21 -- # val=32 00:05:31.507 06:00:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.507 06:00:37 -- accel/accel.sh@20 -- # IFS=: 00:05:31.507 06:00:37 -- accel/accel.sh@20 -- # read -r var val 00:05:31.507 06:00:37 -- accel/accel.sh@21 -- # val=1 00:05:31.507 06:00:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.507 06:00:37 -- accel/accel.sh@20 -- # IFS=: 00:05:31.507 06:00:37 -- accel/accel.sh@20 -- # read -r var val 00:05:31.507 06:00:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:31.507 06:00:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.507 06:00:37 -- accel/accel.sh@20 -- # IFS=: 00:05:31.507 06:00:37 -- accel/accel.sh@20 -- # read -r var val 00:05:31.507 06:00:37 -- accel/accel.sh@21 -- # val=No 00:05:31.508 06:00:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.508 06:00:37 -- accel/accel.sh@20 -- # IFS=: 00:05:31.508 06:00:37 -- accel/accel.sh@20 -- # read -r var val 00:05:31.508 06:00:37 -- accel/accel.sh@21 -- # val= 00:05:31.508 06:00:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.508 06:00:37 -- accel/accel.sh@20 -- # IFS=: 00:05:31.508 06:00:37 -- accel/accel.sh@20 -- # read -r var val 00:05:31.508 06:00:37 -- accel/accel.sh@21 -- # val= 00:05:31.508 06:00:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.508 06:00:37 -- accel/accel.sh@20 -- # IFS=: 00:05:31.508 06:00:37 -- accel/accel.sh@20 -- # read -r var val 00:05:32.883 06:00:38 -- accel/accel.sh@21 -- # val= 00:05:32.883 06:00:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.883 06:00:38 -- accel/accel.sh@20 -- # IFS=: 00:05:32.883 06:00:38 -- accel/accel.sh@20 -- # read -r var val 00:05:32.883 06:00:38 -- accel/accel.sh@21 -- # val= 00:05:32.883 06:00:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.883 06:00:38 -- accel/accel.sh@20 -- # IFS=: 00:05:32.883 06:00:38 -- accel/accel.sh@20 -- # read -r var val 00:05:32.883 06:00:38 -- accel/accel.sh@21 -- # val= 00:05:32.883 06:00:38 -- 
accel/accel.sh@22 -- # case "$var" in 00:05:32.883 06:00:38 -- accel/accel.sh@20 -- # IFS=: 00:05:32.883 06:00:38 -- accel/accel.sh@20 -- # read -r var val 00:05:32.883 06:00:38 -- accel/accel.sh@21 -- # val= 00:05:32.883 06:00:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.883 06:00:38 -- accel/accel.sh@20 -- # IFS=: 00:05:32.883 06:00:38 -- accel/accel.sh@20 -- # read -r var val 00:05:32.883 06:00:38 -- accel/accel.sh@21 -- # val= 00:05:32.883 06:00:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.883 06:00:38 -- accel/accel.sh@20 -- # IFS=: 00:05:32.883 06:00:38 -- accel/accel.sh@20 -- # read -r var val 00:05:32.883 06:00:38 -- accel/accel.sh@21 -- # val= 00:05:32.883 06:00:38 -- accel/accel.sh@22 -- # case "$var" in 00:05:32.883 06:00:38 -- accel/accel.sh@20 -- # IFS=: 00:05:32.883 06:00:38 -- accel/accel.sh@20 -- # read -r var val 00:05:32.883 06:00:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:32.883 06:00:38 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:05:32.883 06:00:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:32.883 00:05:32.883 real 0m2.952s 00:05:32.883 user 0m2.661s 00:05:32.883 sys 0m0.284s 00:05:32.883 06:00:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.883 06:00:38 -- common/autotest_common.sh@10 -- # set +x 00:05:32.883 ************************************ 00:05:32.883 END TEST accel_dif_generate 00:05:32.883 ************************************ 00:05:32.883 06:00:39 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:32.883 06:00:39 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:32.883 06:00:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:32.883 06:00:39 -- common/autotest_common.sh@10 -- # set +x 00:05:32.883 ************************************ 00:05:32.883 START TEST accel_dif_generate_copy 00:05:32.883 ************************************ 00:05:32.883 06:00:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:05:32.883 06:00:39 -- accel/accel.sh@16 -- # local accel_opc 00:05:32.883 06:00:39 -- accel/accel.sh@17 -- # local accel_module 00:05:32.883 06:00:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:05:32.883 06:00:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:32.883 06:00:39 -- accel/accel.sh@12 -- # build_accel_config 00:05:32.883 06:00:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:32.883 06:00:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:32.883 06:00:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:32.883 06:00:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:32.883 06:00:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:32.883 06:00:39 -- accel/accel.sh@41 -- # local IFS=, 00:05:32.883 06:00:39 -- accel/accel.sh@42 -- # jq -r . 00:05:32.883 [2024-07-13 06:00:39.030881] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:32.883 [2024-07-13 06:00:39.030961] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1007988 ] 00:05:32.883 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.883 [2024-07-13 06:00:39.094565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.883 [2024-07-13 06:00:39.214377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.257 06:00:40 -- accel/accel.sh@18 -- # out=' 00:05:34.257 SPDK Configuration: 00:05:34.257 Core mask: 0x1 00:05:34.257 00:05:34.257 Accel Perf Configuration: 00:05:34.257 Workload Type: dif_generate_copy 00:05:34.257 Vector size: 4096 bytes 00:05:34.257 Transfer size: 4096 bytes 00:05:34.257 Vector count 1 00:05:34.257 Module: software 00:05:34.257 Queue depth: 32 00:05:34.257 Allocate depth: 32 00:05:34.257 # threads/core: 1 00:05:34.257 Run time: 1 seconds 00:05:34.257 Verify: No 00:05:34.257 00:05:34.257 Running for 1 seconds... 00:05:34.257 00:05:34.257 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:34.257 ------------------------------------------------------------------------------------ 00:05:34.257 0,0 76000/s 301 MiB/s 0 0 00:05:34.257 ==================================================================================== 00:05:34.257 Total 76000/s 296 MiB/s 0 0' 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # IFS=: 00:05:34.257 06:00:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # read -r var val 00:05:34.257 06:00:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:34.257 06:00:40 -- accel/accel.sh@12 -- # build_accel_config 00:05:34.257 06:00:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:34.257 06:00:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.257 06:00:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.257 06:00:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:34.257 06:00:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:34.257 06:00:40 -- accel/accel.sh@41 -- # local IFS=, 00:05:34.257 06:00:40 -- accel/accel.sh@42 -- # jq -r . 00:05:34.257 [2024-07-13 06:00:40.513570] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:34.257 [2024-07-13 06:00:40.513653] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1008246 ] 00:05:34.257 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.257 [2024-07-13 06:00:40.574486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.257 [2024-07-13 06:00:40.695204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.257 06:00:40 -- accel/accel.sh@21 -- # val= 00:05:34.257 06:00:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # IFS=: 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # read -r var val 00:05:34.257 06:00:40 -- accel/accel.sh@21 -- # val= 00:05:34.257 06:00:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # IFS=: 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # read -r var val 00:05:34.257 06:00:40 -- accel/accel.sh@21 -- # val=0x1 00:05:34.257 06:00:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # IFS=: 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # read -r var val 00:05:34.257 06:00:40 -- accel/accel.sh@21 -- # val= 00:05:34.257 06:00:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # IFS=: 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # read -r var val 00:05:34.257 06:00:40 -- accel/accel.sh@21 -- # val= 00:05:34.257 06:00:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # IFS=: 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # read -r var val 00:05:34.257 06:00:40 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:05:34.257 06:00:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.257 06:00:40 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # IFS=: 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # read -r var val 00:05:34.257 06:00:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:34.257 06:00:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # IFS=: 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # read -r var val 00:05:34.257 06:00:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:34.257 06:00:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # IFS=: 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # read -r var val 00:05:34.257 06:00:40 -- accel/accel.sh@21 -- # val= 00:05:34.257 06:00:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # IFS=: 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # read -r var val 00:05:34.257 06:00:40 -- accel/accel.sh@21 -- # val=software 00:05:34.257 06:00:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.257 06:00:40 -- accel/accel.sh@23 -- # accel_module=software 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # IFS=: 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # read -r var val 00:05:34.257 06:00:40 -- accel/accel.sh@21 -- # val=32 00:05:34.257 06:00:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # IFS=: 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # read -r var val 00:05:34.257 06:00:40 -- accel/accel.sh@21 -- # val=32 00:05:34.257 06:00:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # IFS=: 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # read -r 
var val 00:05:34.257 06:00:40 -- accel/accel.sh@21 -- # val=1 00:05:34.257 06:00:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # IFS=: 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # read -r var val 00:05:34.257 06:00:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:34.257 06:00:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # IFS=: 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # read -r var val 00:05:34.257 06:00:40 -- accel/accel.sh@21 -- # val=No 00:05:34.257 06:00:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # IFS=: 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # read -r var val 00:05:34.257 06:00:40 -- accel/accel.sh@21 -- # val= 00:05:34.257 06:00:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # IFS=: 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # read -r var val 00:05:34.257 06:00:40 -- accel/accel.sh@21 -- # val= 00:05:34.257 06:00:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # IFS=: 00:05:34.257 06:00:40 -- accel/accel.sh@20 -- # read -r var val 00:05:35.627 06:00:41 -- accel/accel.sh@21 -- # val= 00:05:35.627 06:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.627 06:00:41 -- accel/accel.sh@20 -- # IFS=: 00:05:35.627 06:00:41 -- accel/accel.sh@20 -- # read -r var val 00:05:35.627 06:00:41 -- accel/accel.sh@21 -- # val= 00:05:35.627 06:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.627 06:00:41 -- accel/accel.sh@20 -- # IFS=: 00:05:35.627 06:00:41 -- accel/accel.sh@20 -- # read -r var val 00:05:35.627 06:00:41 -- accel/accel.sh@21 -- # val= 00:05:35.627 06:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.627 06:00:41 -- accel/accel.sh@20 -- # IFS=: 00:05:35.627 06:00:41 -- accel/accel.sh@20 -- # read -r var val 00:05:35.627 06:00:41 -- accel/accel.sh@21 -- # val= 00:05:35.627 06:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.627 06:00:41 -- accel/accel.sh@20 -- # IFS=: 00:05:35.627 06:00:41 -- accel/accel.sh@20 -- # read -r var val 00:05:35.627 06:00:41 -- accel/accel.sh@21 -- # val= 00:05:35.627 06:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.627 06:00:41 -- accel/accel.sh@20 -- # IFS=: 00:05:35.627 06:00:41 -- accel/accel.sh@20 -- # read -r var val 00:05:35.627 06:00:41 -- accel/accel.sh@21 -- # val= 00:05:35.627 06:00:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:35.627 06:00:41 -- accel/accel.sh@20 -- # IFS=: 00:05:35.627 06:00:41 -- accel/accel.sh@20 -- # read -r var val 00:05:35.627 06:00:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:35.627 06:00:41 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:05:35.627 06:00:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:35.627 00:05:35.627 real 0m2.965s 00:05:35.627 user 0m2.665s 00:05:35.627 sys 0m0.291s 00:05:35.627 06:00:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.627 06:00:41 -- common/autotest_common.sh@10 -- # set +x 00:05:35.627 ************************************ 00:05:35.627 END TEST accel_dif_generate_copy 00:05:35.627 ************************************ 00:05:35.627 06:00:41 -- accel/accel.sh@107 -- # [[ y == y ]] 00:05:35.627 06:00:41 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:35.627 06:00:41 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:35.627 06:00:41 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:05:35.627 06:00:41 -- common/autotest_common.sh@10 -- # set +x 00:05:35.627 ************************************ 00:05:35.627 START TEST accel_comp 00:05:35.627 ************************************ 00:05:35.627 06:00:42 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:35.627 06:00:42 -- accel/accel.sh@16 -- # local accel_opc 00:05:35.627 06:00:42 -- accel/accel.sh@17 -- # local accel_module 00:05:35.627 06:00:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:35.627 06:00:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:35.627 06:00:42 -- accel/accel.sh@12 -- # build_accel_config 00:05:35.627 06:00:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:35.627 06:00:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.627 06:00:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.627 06:00:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:35.627 06:00:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:35.627 06:00:42 -- accel/accel.sh@41 -- # local IFS=, 00:05:35.627 06:00:42 -- accel/accel.sh@42 -- # jq -r . 00:05:35.627 [2024-07-13 06:00:42.021531] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:35.627 [2024-07-13 06:00:42.021624] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1008411 ] 00:05:35.627 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.627 [2024-07-13 06:00:42.085404] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.884 [2024-07-13 06:00:42.202168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.254 06:00:43 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:37.254 00:05:37.254 SPDK Configuration: 00:05:37.254 Core mask: 0x1 00:05:37.254 00:05:37.254 Accel Perf Configuration: 00:05:37.254 Workload Type: compress 00:05:37.254 Transfer size: 4096 bytes 00:05:37.254 Vector count 1 00:05:37.254 Module: software 00:05:37.254 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:37.254 Queue depth: 32 00:05:37.254 Allocate depth: 32 00:05:37.254 # threads/core: 1 00:05:37.254 Run time: 1 seconds 00:05:37.254 Verify: No 00:05:37.254 00:05:37.254 Running for 1 seconds... 
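The compress test above is the first workload in this run that reads a real corpus: the command line passes -l .../spdk/test/accel/bib, which shows up in the configuration dump as the "File Name" field and triggers the "Preparing input file..." message. A hedged sketch of pointing the same workload at another file follows; /tmp/sample-corpus.bin is a made-up placeholder and -q/-o are assumed option names.

    # Hypothetical: feed accel_perf's compress workload a different input file
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w compress \
        -l /tmp/sample-corpus.bin -q 32 -o 4096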
00:05:37.254 00:05:37.254 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:37.254 ------------------------------------------------------------------------------------ 00:05:37.254 0,0 32256/s 134 MiB/s 0 0 00:05:37.254 ==================================================================================== 00:05:37.254 Total 32256/s 126 MiB/s 0 0' 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # IFS=: 00:05:37.254 06:00:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # read -r var val 00:05:37.254 06:00:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:37.254 06:00:43 -- accel/accel.sh@12 -- # build_accel_config 00:05:37.254 06:00:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:37.254 06:00:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.254 06:00:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.254 06:00:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:37.254 06:00:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:37.254 06:00:43 -- accel/accel.sh@41 -- # local IFS=, 00:05:37.254 06:00:43 -- accel/accel.sh@42 -- # jq -r . 00:05:37.254 [2024-07-13 06:00:43.498352] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:37.254 [2024-07-13 06:00:43.498436] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1008555 ] 00:05:37.254 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.254 [2024-07-13 06:00:43.559832] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.254 [2024-07-13 06:00:43.679178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.254 06:00:43 -- accel/accel.sh@21 -- # val= 00:05:37.254 06:00:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # IFS=: 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # read -r var val 00:05:37.254 06:00:43 -- accel/accel.sh@21 -- # val= 00:05:37.254 06:00:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # IFS=: 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # read -r var val 00:05:37.254 06:00:43 -- accel/accel.sh@21 -- # val= 00:05:37.254 06:00:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # IFS=: 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # read -r var val 00:05:37.254 06:00:43 -- accel/accel.sh@21 -- # val=0x1 00:05:37.254 06:00:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # IFS=: 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # read -r var val 00:05:37.254 06:00:43 -- accel/accel.sh@21 -- # val= 00:05:37.254 06:00:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # IFS=: 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # read -r var val 00:05:37.254 06:00:43 -- accel/accel.sh@21 -- # val= 00:05:37.254 06:00:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # IFS=: 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # read -r var val 00:05:37.254 06:00:43 -- accel/accel.sh@21 -- # val=compress 00:05:37.254 06:00:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.254 
06:00:43 -- accel/accel.sh@24 -- # accel_opc=compress 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # IFS=: 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # read -r var val 00:05:37.254 06:00:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:37.254 06:00:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # IFS=: 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # read -r var val 00:05:37.254 06:00:43 -- accel/accel.sh@21 -- # val= 00:05:37.254 06:00:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # IFS=: 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # read -r var val 00:05:37.254 06:00:43 -- accel/accel.sh@21 -- # val=software 00:05:37.254 06:00:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.254 06:00:43 -- accel/accel.sh@23 -- # accel_module=software 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # IFS=: 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # read -r var val 00:05:37.254 06:00:43 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:37.254 06:00:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # IFS=: 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # read -r var val 00:05:37.254 06:00:43 -- accel/accel.sh@21 -- # val=32 00:05:37.254 06:00:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # IFS=: 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # read -r var val 00:05:37.254 06:00:43 -- accel/accel.sh@21 -- # val=32 00:05:37.254 06:00:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # IFS=: 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # read -r var val 00:05:37.254 06:00:43 -- accel/accel.sh@21 -- # val=1 00:05:37.254 06:00:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # IFS=: 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # read -r var val 00:05:37.254 06:00:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:37.254 06:00:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # IFS=: 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # read -r var val 00:05:37.254 06:00:43 -- accel/accel.sh@21 -- # val=No 00:05:37.254 06:00:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # IFS=: 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # read -r var val 00:05:37.254 06:00:43 -- accel/accel.sh@21 -- # val= 00:05:37.254 06:00:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # IFS=: 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # read -r var val 00:05:37.254 06:00:43 -- accel/accel.sh@21 -- # val= 00:05:37.254 06:00:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # IFS=: 00:05:37.254 06:00:43 -- accel/accel.sh@20 -- # read -r var val 00:05:38.623 06:00:44 -- accel/accel.sh@21 -- # val= 00:05:38.623 06:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.623 06:00:44 -- accel/accel.sh@20 -- # IFS=: 00:05:38.623 06:00:44 -- accel/accel.sh@20 -- # read -r var val 00:05:38.623 06:00:44 -- accel/accel.sh@21 -- # val= 00:05:38.623 06:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.623 06:00:44 -- accel/accel.sh@20 -- # IFS=: 00:05:38.623 06:00:44 -- accel/accel.sh@20 -- # read -r var val 00:05:38.624 06:00:44 -- accel/accel.sh@21 -- # val= 00:05:38.624 06:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.624 06:00:44 -- accel/accel.sh@20 -- # 
IFS=: 00:05:38.624 06:00:44 -- accel/accel.sh@20 -- # read -r var val 00:05:38.624 06:00:44 -- accel/accel.sh@21 -- # val= 00:05:38.624 06:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.624 06:00:44 -- accel/accel.sh@20 -- # IFS=: 00:05:38.624 06:00:44 -- accel/accel.sh@20 -- # read -r var val 00:05:38.624 06:00:44 -- accel/accel.sh@21 -- # val= 00:05:38.624 06:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.624 06:00:44 -- accel/accel.sh@20 -- # IFS=: 00:05:38.624 06:00:44 -- accel/accel.sh@20 -- # read -r var val 00:05:38.624 06:00:44 -- accel/accel.sh@21 -- # val= 00:05:38.624 06:00:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:38.624 06:00:44 -- accel/accel.sh@20 -- # IFS=: 00:05:38.624 06:00:44 -- accel/accel.sh@20 -- # read -r var val 00:05:38.624 06:00:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:38.624 06:00:44 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:05:38.624 06:00:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:38.624 00:05:38.624 real 0m2.954s 00:05:38.624 user 0m2.652s 00:05:38.624 sys 0m0.295s 00:05:38.624 06:00:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.624 06:00:44 -- common/autotest_common.sh@10 -- # set +x 00:05:38.624 ************************************ 00:05:38.624 END TEST accel_comp 00:05:38.624 ************************************ 00:05:38.624 06:00:44 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:38.624 06:00:44 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:38.624 06:00:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.624 06:00:44 -- common/autotest_common.sh@10 -- # set +x 00:05:38.624 ************************************ 00:05:38.624 START TEST accel_decomp 00:05:38.624 ************************************ 00:05:38.624 06:00:44 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:38.624 06:00:44 -- accel/accel.sh@16 -- # local accel_opc 00:05:38.624 06:00:44 -- accel/accel.sh@17 -- # local accel_module 00:05:38.624 06:00:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:38.624 06:00:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:38.624 06:00:44 -- accel/accel.sh@12 -- # build_accel_config 00:05:38.624 06:00:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:38.624 06:00:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.624 06:00:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.624 06:00:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:38.624 06:00:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:38.624 06:00:44 -- accel/accel.sh@41 -- # local IFS=, 00:05:38.624 06:00:44 -- accel/accel.sh@42 -- # jq -r . 00:05:38.624 [2024-07-13 06:00:44.999434] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:38.624 [2024-07-13 06:00:44.999518] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1008835 ] 00:05:38.624 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.624 [2024-07-13 06:00:45.061471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.882 [2024-07-13 06:00:45.182571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.263 06:00:46 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:40.263 00:05:40.263 SPDK Configuration: 00:05:40.263 Core mask: 0x1 00:05:40.263 00:05:40.263 Accel Perf Configuration: 00:05:40.263 Workload Type: decompress 00:05:40.263 Transfer size: 4096 bytes 00:05:40.263 Vector count 1 00:05:40.263 Module: software 00:05:40.263 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:40.263 Queue depth: 32 00:05:40.263 Allocate depth: 32 00:05:40.263 # threads/core: 1 00:05:40.263 Run time: 1 seconds 00:05:40.263 Verify: Yes 00:05:40.263 00:05:40.263 Running for 1 seconds... 00:05:40.263 00:05:40.263 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:40.263 ------------------------------------------------------------------------------------ 00:05:40.263 0,0 55424/s 102 MiB/s 0 0 00:05:40.263 ==================================================================================== 00:05:40.263 Total 55424/s 216 MiB/s 0 0' 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # IFS=: 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # read -r var val 00:05:40.263 06:00:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:40.263 06:00:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:40.263 06:00:46 -- accel/accel.sh@12 -- # build_accel_config 00:05:40.263 06:00:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:40.263 06:00:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.263 06:00:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.263 06:00:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:40.263 06:00:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:40.263 06:00:46 -- accel/accel.sh@41 -- # local IFS=, 00:05:40.263 06:00:46 -- accel/accel.sh@42 -- # jq -r . 00:05:40.263 [2024-07-13 06:00:46.487273] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
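For reference, the accel_perf call traced above at accel/accel.sh@12 is the command behind this accel_decomp run. The flag glosses below are inferred from the configuration dump the tool prints rather than from its help text, and -c /dev/fd/62 carries the JSON accel config assembled by build_accel_config (which adds nothing in these runs: accel_json_cfg stays empty and every "[[ 0 -gt 0 ]]" check is false). A minimal standalone sketch, assuming -c can be dropped when no module config is supplied:

  # Hedged sketch of the traced single-core software decompress run.
  #   -t 1           -> "Run time: 1 seconds"
  #   -w decompress  -> "Workload Type: decompress"
  #   -l <file>      -> "File Name: .../test/accel/bib"
  #   -y             -> "Verify: Yes"
  # Assumption: -c (JSON accel config) may be omitted when no module config is needed.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y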
00:05:40.263 [2024-07-13 06:00:46.487358] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1008973 ] 00:05:40.263 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.263 [2024-07-13 06:00:46.553038] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.263 [2024-07-13 06:00:46.673999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.263 06:00:46 -- accel/accel.sh@21 -- # val= 00:05:40.263 06:00:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # IFS=: 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # read -r var val 00:05:40.263 06:00:46 -- accel/accel.sh@21 -- # val= 00:05:40.263 06:00:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # IFS=: 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # read -r var val 00:05:40.263 06:00:46 -- accel/accel.sh@21 -- # val= 00:05:40.263 06:00:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # IFS=: 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # read -r var val 00:05:40.263 06:00:46 -- accel/accel.sh@21 -- # val=0x1 00:05:40.263 06:00:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # IFS=: 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # read -r var val 00:05:40.263 06:00:46 -- accel/accel.sh@21 -- # val= 00:05:40.263 06:00:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # IFS=: 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # read -r var val 00:05:40.263 06:00:46 -- accel/accel.sh@21 -- # val= 00:05:40.263 06:00:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # IFS=: 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # read -r var val 00:05:40.263 06:00:46 -- accel/accel.sh@21 -- # val=decompress 00:05:40.263 06:00:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.263 06:00:46 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # IFS=: 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # read -r var val 00:05:40.263 06:00:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:40.263 06:00:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # IFS=: 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # read -r var val 00:05:40.263 06:00:46 -- accel/accel.sh@21 -- # val= 00:05:40.263 06:00:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # IFS=: 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # read -r var val 00:05:40.263 06:00:46 -- accel/accel.sh@21 -- # val=software 00:05:40.263 06:00:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.263 06:00:46 -- accel/accel.sh@23 -- # accel_module=software 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # IFS=: 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # read -r var val 00:05:40.263 06:00:46 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:40.263 06:00:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # IFS=: 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # read -r var val 00:05:40.263 06:00:46 -- accel/accel.sh@21 -- # val=32 00:05:40.263 06:00:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # IFS=: 00:05:40.263 06:00:46 
-- accel/accel.sh@20 -- # read -r var val 00:05:40.263 06:00:46 -- accel/accel.sh@21 -- # val=32 00:05:40.263 06:00:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # IFS=: 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # read -r var val 00:05:40.263 06:00:46 -- accel/accel.sh@21 -- # val=1 00:05:40.263 06:00:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # IFS=: 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # read -r var val 00:05:40.263 06:00:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:40.263 06:00:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # IFS=: 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # read -r var val 00:05:40.263 06:00:46 -- accel/accel.sh@21 -- # val=Yes 00:05:40.263 06:00:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # IFS=: 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # read -r var val 00:05:40.263 06:00:46 -- accel/accel.sh@21 -- # val= 00:05:40.263 06:00:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # IFS=: 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # read -r var val 00:05:40.263 06:00:46 -- accel/accel.sh@21 -- # val= 00:05:40.263 06:00:46 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # IFS=: 00:05:40.263 06:00:46 -- accel/accel.sh@20 -- # read -r var val 00:05:41.633 06:00:47 -- accel/accel.sh@21 -- # val= 00:05:41.633 06:00:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.633 06:00:47 -- accel/accel.sh@20 -- # IFS=: 00:05:41.633 06:00:47 -- accel/accel.sh@20 -- # read -r var val 00:05:41.633 06:00:47 -- accel/accel.sh@21 -- # val= 00:05:41.633 06:00:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.633 06:00:47 -- accel/accel.sh@20 -- # IFS=: 00:05:41.633 06:00:47 -- accel/accel.sh@20 -- # read -r var val 00:05:41.633 06:00:47 -- accel/accel.sh@21 -- # val= 00:05:41.633 06:00:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.633 06:00:47 -- accel/accel.sh@20 -- # IFS=: 00:05:41.633 06:00:47 -- accel/accel.sh@20 -- # read -r var val 00:05:41.633 06:00:47 -- accel/accel.sh@21 -- # val= 00:05:41.633 06:00:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.633 06:00:47 -- accel/accel.sh@20 -- # IFS=: 00:05:41.633 06:00:47 -- accel/accel.sh@20 -- # read -r var val 00:05:41.633 06:00:47 -- accel/accel.sh@21 -- # val= 00:05:41.633 06:00:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.633 06:00:47 -- accel/accel.sh@20 -- # IFS=: 00:05:41.633 06:00:47 -- accel/accel.sh@20 -- # read -r var val 00:05:41.633 06:00:47 -- accel/accel.sh@21 -- # val= 00:05:41.633 06:00:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:41.633 06:00:47 -- accel/accel.sh@20 -- # IFS=: 00:05:41.633 06:00:47 -- accel/accel.sh@20 -- # read -r var val 00:05:41.633 06:00:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:41.633 06:00:47 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:41.634 06:00:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:41.634 00:05:41.634 real 0m2.965s 00:05:41.634 user 0m2.667s 00:05:41.634 sys 0m0.290s 00:05:41.634 06:00:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.634 06:00:47 -- common/autotest_common.sh@10 -- # set +x 00:05:41.634 ************************************ 00:05:41.634 END TEST accel_decomp 00:05:41.634 ************************************ 00:05:41.634 06:00:47 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:41.634 06:00:47 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:05:41.634 06:00:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.634 06:00:47 -- common/autotest_common.sh@10 -- # set +x 00:05:41.634 ************************************ 00:05:41.634 START TEST accel_decmop_full 00:05:41.634 ************************************ 00:05:41.634 06:00:47 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:41.634 06:00:47 -- accel/accel.sh@16 -- # local accel_opc 00:05:41.634 06:00:47 -- accel/accel.sh@17 -- # local accel_module 00:05:41.634 06:00:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:41.634 06:00:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:41.634 06:00:47 -- accel/accel.sh@12 -- # build_accel_config 00:05:41.634 06:00:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:41.634 06:00:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.634 06:00:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.634 06:00:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:41.634 06:00:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:41.634 06:00:47 -- accel/accel.sh@41 -- # local IFS=, 00:05:41.634 06:00:47 -- accel/accel.sh@42 -- # jq -r . 00:05:41.634 [2024-07-13 06:00:47.990024] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:41.634 [2024-07-13 06:00:47.990103] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1009143 ] 00:05:41.634 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.634 [2024-07-13 06:00:48.052561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.891 [2024-07-13 06:00:48.174112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.274 06:00:49 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:43.274 00:05:43.274 SPDK Configuration: 00:05:43.274 Core mask: 0x1 00:05:43.274 00:05:43.274 Accel Perf Configuration: 00:05:43.274 Workload Type: decompress 00:05:43.274 Transfer size: 111250 bytes 00:05:43.274 Vector count 1 00:05:43.274 Module: software 00:05:43.274 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:43.274 Queue depth: 32 00:05:43.274 Allocate depth: 32 00:05:43.274 # threads/core: 1 00:05:43.274 Run time: 1 seconds 00:05:43.274 Verify: Yes 00:05:43.274 00:05:43.274 Running for 1 seconds... 
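The only new flag in this accel_decmop_full variant is -o 0 on the traced accel_perf line; with it, the configuration dump above reports "Transfer size: 111250 bytes" instead of the 4096 bytes seen in the plain accel_decomp runs, which is presumably what the *_full test names refer to (an inference from the two dumps, not a documented description of -o). The traced invocation, reformatted for readability:

  # As traced at accel/accel.sh@12 for accel_decmop_full; /dev/fd/62 is the harness-supplied JSON config.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress \
      -l "$SPDK/test/accel/bib" -y -o 0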
00:05:43.274 00:05:43.274 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:43.274 ------------------------------------------------------------------------------------ 00:05:43.274 0,0 3808/s 157 MiB/s 0 0 00:05:43.274 ==================================================================================== 00:05:43.274 Total 3808/s 404 MiB/s 0 0' 00:05:43.274 06:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:43.274 06:00:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:43.274 06:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:43.274 06:00:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:43.274 06:00:49 -- accel/accel.sh@12 -- # build_accel_config 00:05:43.274 06:00:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:43.274 06:00:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.274 06:00:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.274 06:00:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:43.274 06:00:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:43.274 06:00:49 -- accel/accel.sh@41 -- # local IFS=, 00:05:43.274 06:00:49 -- accel/accel.sh@42 -- # jq -r . 00:05:43.274 [2024-07-13 06:00:49.485186] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:43.274 [2024-07-13 06:00:49.485270] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1009401 ] 00:05:43.274 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.274 [2024-07-13 06:00:49.551424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.274 [2024-07-13 06:00:49.671473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.274 06:00:49 -- accel/accel.sh@21 -- # val= 00:05:43.274 06:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.274 06:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:43.274 06:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:43.274 06:00:49 -- accel/accel.sh@21 -- # val= 00:05:43.274 06:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.274 06:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:43.274 06:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:43.274 06:00:49 -- accel/accel.sh@21 -- # val= 00:05:43.274 06:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.274 06:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:43.274 06:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:43.274 06:00:49 -- accel/accel.sh@21 -- # val=0x1 00:05:43.274 06:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.274 06:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:43.274 06:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:43.274 06:00:49 -- accel/accel.sh@21 -- # val= 00:05:43.274 06:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.274 06:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:43.274 06:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:43.274 06:00:49 -- accel/accel.sh@21 -- # val= 00:05:43.274 06:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.274 06:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:43.274 06:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:43.274 06:00:49 -- accel/accel.sh@21 -- # val=decompress 00:05:43.274 06:00:49 -- accel/accel.sh@22 -- # case "$var" 
in 00:05:43.274 06:00:49 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:43.274 06:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:43.274 06:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:43.274 06:00:49 -- accel/accel.sh@21 -- # val='111250 bytes' 00:05:43.274 06:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.274 06:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:43.274 06:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:43.274 06:00:49 -- accel/accel.sh@21 -- # val= 00:05:43.274 06:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.274 06:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:43.274 06:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:43.274 06:00:49 -- accel/accel.sh@21 -- # val=software 00:05:43.274 06:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.274 06:00:49 -- accel/accel.sh@23 -- # accel_module=software 00:05:43.274 06:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:43.274 06:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:43.274 06:00:49 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:43.274 06:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.274 06:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:43.274 06:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:43.274 06:00:49 -- accel/accel.sh@21 -- # val=32 00:05:43.274 06:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.274 06:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:43.274 06:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:43.274 06:00:49 -- accel/accel.sh@21 -- # val=32 00:05:43.274 06:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.274 06:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:43.274 06:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:43.274 06:00:49 -- accel/accel.sh@21 -- # val=1 00:05:43.274 06:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.274 06:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:43.274 06:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:43.274 06:00:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:43.274 06:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.275 06:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:43.275 06:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:43.275 06:00:49 -- accel/accel.sh@21 -- # val=Yes 00:05:43.275 06:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.275 06:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:43.275 06:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:43.275 06:00:49 -- accel/accel.sh@21 -- # val= 00:05:43.275 06:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.275 06:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:43.275 06:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:43.275 06:00:49 -- accel/accel.sh@21 -- # val= 00:05:43.275 06:00:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.275 06:00:49 -- accel/accel.sh@20 -- # IFS=: 00:05:43.275 06:00:49 -- accel/accel.sh@20 -- # read -r var val 00:05:44.697 06:00:50 -- accel/accel.sh@21 -- # val= 00:05:44.697 06:00:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.697 06:00:50 -- accel/accel.sh@20 -- # IFS=: 00:05:44.697 06:00:50 -- accel/accel.sh@20 -- # read -r var val 00:05:44.697 06:00:50 -- accel/accel.sh@21 -- # val= 00:05:44.697 06:00:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.697 06:00:50 -- accel/accel.sh@20 -- # IFS=: 00:05:44.697 06:00:50 -- accel/accel.sh@20 -- # read -r var val 00:05:44.697 06:00:50 -- accel/accel.sh@21 -- # val= 00:05:44.697 06:00:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.697 06:00:50 -- 
accel/accel.sh@20 -- # IFS=: 00:05:44.697 06:00:50 -- accel/accel.sh@20 -- # read -r var val 00:05:44.697 06:00:50 -- accel/accel.sh@21 -- # val= 00:05:44.697 06:00:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.697 06:00:50 -- accel/accel.sh@20 -- # IFS=: 00:05:44.697 06:00:50 -- accel/accel.sh@20 -- # read -r var val 00:05:44.697 06:00:50 -- accel/accel.sh@21 -- # val= 00:05:44.697 06:00:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.697 06:00:50 -- accel/accel.sh@20 -- # IFS=: 00:05:44.697 06:00:50 -- accel/accel.sh@20 -- # read -r var val 00:05:44.697 06:00:50 -- accel/accel.sh@21 -- # val= 00:05:44.697 06:00:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.697 06:00:50 -- accel/accel.sh@20 -- # IFS=: 00:05:44.697 06:00:50 -- accel/accel.sh@20 -- # read -r var val 00:05:44.697 06:00:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:44.697 06:00:50 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:44.697 06:00:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.697 00:05:44.697 real 0m3.002s 00:05:44.697 user 0m2.687s 00:05:44.697 sys 0m0.307s 00:05:44.697 06:00:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.697 06:00:50 -- common/autotest_common.sh@10 -- # set +x 00:05:44.697 ************************************ 00:05:44.697 END TEST accel_decmop_full 00:05:44.697 ************************************ 00:05:44.697 06:00:50 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:44.697 06:00:50 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:05:44.697 06:00:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.697 06:00:50 -- common/autotest_common.sh@10 -- # set +x 00:05:44.697 ************************************ 00:05:44.697 START TEST accel_decomp_mcore 00:05:44.697 ************************************ 00:05:44.697 06:00:50 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:44.697 06:00:50 -- accel/accel.sh@16 -- # local accel_opc 00:05:44.697 06:00:50 -- accel/accel.sh@17 -- # local accel_module 00:05:44.697 06:00:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:44.697 06:00:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:44.697 06:00:50 -- accel/accel.sh@12 -- # build_accel_config 00:05:44.697 06:00:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:44.697 06:00:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.697 06:00:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.697 06:00:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:44.697 06:00:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:44.697 06:00:50 -- accel/accel.sh@41 -- # local IFS=, 00:05:44.697 06:00:50 -- accel/accel.sh@42 -- # jq -r . 00:05:44.697 [2024-07-13 06:00:51.014963] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
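accel_decomp_mcore keeps the same decompress workload but adds -m 0xf, the SPDK core mask (binary 1111, cores 0 through 3), which is why the run below reports "Total cores available: 4" and starts four reactors instead of one. The traced invocation, reformatted for readability:

  # As traced at accel/accel.sh@12 for accel_decomp_mcore; -m sets the reactor core mask.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress \
      -l "$SPDK/test/accel/bib" -y -m 0xf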
00:05:44.697 [2024-07-13 06:00:51.015040] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1009563 ] 00:05:44.697 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.697 [2024-07-13 06:00:51.078135] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:44.697 [2024-07-13 06:00:51.202153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.697 [2024-07-13 06:00:51.202208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.697 [2024-07-13 06:00:51.202260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:44.697 [2024-07-13 06:00:51.202264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.075 06:00:52 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:46.075 00:05:46.075 SPDK Configuration: 00:05:46.075 Core mask: 0xf 00:05:46.075 00:05:46.075 Accel Perf Configuration: 00:05:46.075 Workload Type: decompress 00:05:46.075 Transfer size: 4096 bytes 00:05:46.075 Vector count 1 00:05:46.075 Module: software 00:05:46.075 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:46.075 Queue depth: 32 00:05:46.075 Allocate depth: 32 00:05:46.075 # threads/core: 1 00:05:46.075 Run time: 1 seconds 00:05:46.075 Verify: Yes 00:05:46.075 00:05:46.075 Running for 1 seconds... 00:05:46.075 00:05:46.075 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:46.075 ------------------------------------------------------------------------------------ 00:05:46.075 0,0 50336/s 92 MiB/s 0 0 00:05:46.075 3,0 50880/s 93 MiB/s 0 0 00:05:46.075 2,0 50656/s 93 MiB/s 0 0 00:05:46.075 1,0 50656/s 93 MiB/s 0 0 00:05:46.075 ==================================================================================== 00:05:46.075 Total 202528/s 791 MiB/s 0 0' 00:05:46.075 06:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:46.075 06:00:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:46.075 06:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:46.075 06:00:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:46.075 06:00:52 -- accel/accel.sh@12 -- # build_accel_config 00:05:46.075 06:00:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:46.075 06:00:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.075 06:00:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.075 06:00:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:46.075 06:00:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:46.075 06:00:52 -- accel/accel.sh@41 -- # local IFS=, 00:05:46.075 06:00:52 -- accel/accel.sh@42 -- # jq -r . 00:05:46.075 [2024-07-13 06:00:52.517253] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
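The Total row in the table above is just the four per-core transfer rates summed and multiplied by the 4096-byte transfer size. A quick check with the numbers copied from the table, in plain bash arithmetic (MiB/s rounded down):

  # Per-core transfers/s from the 0xf run above (cores 0, 3, 2, 1).
  total=$((50336 + 50880 + 50656 + 50656))
  echo "$total transfers/s, $(( total * 4096 / 1024 / 1024 )) MiB/s"
  # Prints: 202528 transfers/s, 791 MiB/s -- matching the Total row.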
00:05:46.075 [2024-07-13 06:00:52.517333] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1009710 ] 00:05:46.075 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.075 [2024-07-13 06:00:52.579057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:46.334 [2024-07-13 06:00:52.701376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.334 [2024-07-13 06:00:52.701406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.334 [2024-07-13 06:00:52.701433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:46.334 [2024-07-13 06:00:52.701436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.334 06:00:52 -- accel/accel.sh@21 -- # val= 00:05:46.334 06:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:46.334 06:00:52 -- accel/accel.sh@21 -- # val= 00:05:46.334 06:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:46.334 06:00:52 -- accel/accel.sh@21 -- # val= 00:05:46.334 06:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:46.334 06:00:52 -- accel/accel.sh@21 -- # val=0xf 00:05:46.334 06:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:46.334 06:00:52 -- accel/accel.sh@21 -- # val= 00:05:46.334 06:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:46.334 06:00:52 -- accel/accel.sh@21 -- # val= 00:05:46.334 06:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:46.334 06:00:52 -- accel/accel.sh@21 -- # val=decompress 00:05:46.334 06:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.334 06:00:52 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:46.334 06:00:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:46.334 06:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:46.334 06:00:52 -- accel/accel.sh@21 -- # val= 00:05:46.334 06:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:46.334 06:00:52 -- accel/accel.sh@21 -- # val=software 00:05:46.334 06:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.334 06:00:52 -- accel/accel.sh@23 -- # accel_module=software 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:46.334 06:00:52 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:46.334 06:00:52 -- accel/accel.sh@22 -- # case 
"$var" in 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:46.334 06:00:52 -- accel/accel.sh@21 -- # val=32 00:05:46.334 06:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:46.334 06:00:52 -- accel/accel.sh@21 -- # val=32 00:05:46.334 06:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:46.334 06:00:52 -- accel/accel.sh@21 -- # val=1 00:05:46.334 06:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:46.334 06:00:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:46.334 06:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:46.334 06:00:52 -- accel/accel.sh@21 -- # val=Yes 00:05:46.334 06:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:46.334 06:00:52 -- accel/accel.sh@21 -- # val= 00:05:46.334 06:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:46.334 06:00:52 -- accel/accel.sh@21 -- # val= 00:05:46.334 06:00:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # IFS=: 00:05:46.334 06:00:52 -- accel/accel.sh@20 -- # read -r var val 00:05:47.704 06:00:53 -- accel/accel.sh@21 -- # val= 00:05:47.704 06:00:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.704 06:00:53 -- accel/accel.sh@20 -- # IFS=: 00:05:47.704 06:00:53 -- accel/accel.sh@20 -- # read -r var val 00:05:47.704 06:00:53 -- accel/accel.sh@21 -- # val= 00:05:47.704 06:00:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.704 06:00:53 -- accel/accel.sh@20 -- # IFS=: 00:05:47.704 06:00:53 -- accel/accel.sh@20 -- # read -r var val 00:05:47.704 06:00:53 -- accel/accel.sh@21 -- # val= 00:05:47.704 06:00:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.704 06:00:53 -- accel/accel.sh@20 -- # IFS=: 00:05:47.704 06:00:53 -- accel/accel.sh@20 -- # read -r var val 00:05:47.704 06:00:53 -- accel/accel.sh@21 -- # val= 00:05:47.704 06:00:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.704 06:00:53 -- accel/accel.sh@20 -- # IFS=: 00:05:47.704 06:00:53 -- accel/accel.sh@20 -- # read -r var val 00:05:47.704 06:00:53 -- accel/accel.sh@21 -- # val= 00:05:47.704 06:00:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.704 06:00:53 -- accel/accel.sh@20 -- # IFS=: 00:05:47.704 06:00:53 -- accel/accel.sh@20 -- # read -r var val 00:05:47.704 06:00:53 -- accel/accel.sh@21 -- # val= 00:05:47.704 06:00:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.704 06:00:53 -- accel/accel.sh@20 -- # IFS=: 00:05:47.704 06:00:53 -- accel/accel.sh@20 -- # read -r var val 00:05:47.704 06:00:53 -- accel/accel.sh@21 -- # val= 00:05:47.704 06:00:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.704 06:00:53 -- accel/accel.sh@20 -- # IFS=: 00:05:47.704 06:00:53 -- accel/accel.sh@20 -- # read -r var val 00:05:47.704 06:00:53 -- accel/accel.sh@21 -- # val= 00:05:47.704 06:00:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.704 
06:00:53 -- accel/accel.sh@20 -- # IFS=: 00:05:47.704 06:00:53 -- accel/accel.sh@20 -- # read -r var val 00:05:47.704 06:00:53 -- accel/accel.sh@21 -- # val= 00:05:47.704 06:00:53 -- accel/accel.sh@22 -- # case "$var" in 00:05:47.704 06:00:53 -- accel/accel.sh@20 -- # IFS=: 00:05:47.704 06:00:53 -- accel/accel.sh@20 -- # read -r var val 00:05:47.704 06:00:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:47.704 06:00:53 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:47.704 06:00:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.704 00:05:47.704 real 0m2.990s 00:05:47.704 user 0m9.609s 00:05:47.704 sys 0m0.309s 00:05:47.705 06:00:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.705 06:00:53 -- common/autotest_common.sh@10 -- # set +x 00:05:47.705 ************************************ 00:05:47.705 END TEST accel_decomp_mcore 00:05:47.705 ************************************ 00:05:47.705 06:00:54 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:47.705 06:00:54 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:05:47.705 06:00:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:47.705 06:00:54 -- common/autotest_common.sh@10 -- # set +x 00:05:47.705 ************************************ 00:05:47.705 START TEST accel_decomp_full_mcore 00:05:47.705 ************************************ 00:05:47.705 06:00:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:47.705 06:00:54 -- accel/accel.sh@16 -- # local accel_opc 00:05:47.705 06:00:54 -- accel/accel.sh@17 -- # local accel_module 00:05:47.705 06:00:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:47.705 06:00:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:47.705 06:00:54 -- accel/accel.sh@12 -- # build_accel_config 00:05:47.705 06:00:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:47.705 06:00:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.705 06:00:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.705 06:00:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:47.705 06:00:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:47.705 06:00:54 -- accel/accel.sh@41 -- # local IFS=, 00:05:47.705 06:00:54 -- accel/accel.sh@42 -- # jq -r . 00:05:47.705 [2024-07-13 06:00:54.034575] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:47.705 [2024-07-13 06:00:54.034656] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1009997 ] 00:05:47.705 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.705 [2024-07-13 06:00:54.097334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:47.962 [2024-07-13 06:00:54.221925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.962 [2024-07-13 06:00:54.221980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.962 [2024-07-13 06:00:54.222034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:47.962 [2024-07-13 06:00:54.222037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.334 06:00:55 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:49.334 00:05:49.334 SPDK Configuration: 00:05:49.334 Core mask: 0xf 00:05:49.334 00:05:49.334 Accel Perf Configuration: 00:05:49.334 Workload Type: decompress 00:05:49.334 Transfer size: 111250 bytes 00:05:49.334 Vector count 1 00:05:49.334 Module: software 00:05:49.334 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:49.334 Queue depth: 32 00:05:49.334 Allocate depth: 32 00:05:49.334 # threads/core: 1 00:05:49.334 Run time: 1 seconds 00:05:49.334 Verify: Yes 00:05:49.334 00:05:49.334 Running for 1 seconds... 00:05:49.334 00:05:49.334 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:49.334 ------------------------------------------------------------------------------------ 00:05:49.334 0,0 3776/s 155 MiB/s 0 0 00:05:49.334 3,0 3776/s 155 MiB/s 0 0 00:05:49.334 2,0 3776/s 155 MiB/s 0 0 00:05:49.334 1,0 3776/s 155 MiB/s 0 0 00:05:49.334 ==================================================================================== 00:05:49.334 Total 15104/s 1602 MiB/s 0 0' 00:05:49.334 06:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:49.334 06:00:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:49.334 06:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:49.334 06:00:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:49.334 06:00:55 -- accel/accel.sh@12 -- # build_accel_config 00:05:49.334 06:00:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:49.334 06:00:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.334 06:00:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.334 06:00:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:49.334 06:00:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:49.334 06:00:55 -- accel/accel.sh@41 -- # local IFS=, 00:05:49.334 06:00:55 -- accel/accel.sh@42 -- # jq -r . 00:05:49.334 [2024-07-13 06:00:55.545847] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
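The same relation holds for this accel_decomp_full_mcore run, which combines -o 0 with -m 0xf: four cores each at 3776 transfers/s of 111250 bytes. The per-core transfer rate drops by roughly 13x versus the 4-KiB case, but aggregate bandwidth about doubles (791 MiB/s to 1602 MiB/s):

  total=$((4 * 3776))
  echo "$total transfers/s, $(( total * 111250 / 1024 / 1024 )) MiB/s"
  # Prints: 15104 transfers/s, 1602 MiB/s -- matching the Total row above.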
00:05:49.334 [2024-07-13 06:00:55.545941] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1010138 ] 00:05:49.334 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.334 [2024-07-13 06:00:55.608309] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:49.334 [2024-07-13 06:00:55.732030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.334 [2024-07-13 06:00:55.732076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.334 [2024-07-13 06:00:55.732131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:49.334 [2024-07-13 06:00:55.732135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.334 06:00:55 -- accel/accel.sh@21 -- # val= 00:05:49.334 06:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.334 06:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:49.334 06:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:49.334 06:00:55 -- accel/accel.sh@21 -- # val= 00:05:49.334 06:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.334 06:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:49.334 06:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:49.334 06:00:55 -- accel/accel.sh@21 -- # val= 00:05:49.334 06:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.334 06:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:49.334 06:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:49.334 06:00:55 -- accel/accel.sh@21 -- # val=0xf 00:05:49.334 06:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.334 06:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:49.334 06:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:49.334 06:00:55 -- accel/accel.sh@21 -- # val= 00:05:49.334 06:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.334 06:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:49.334 06:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:49.334 06:00:55 -- accel/accel.sh@21 -- # val= 00:05:49.334 06:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.334 06:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:49.334 06:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:49.334 06:00:55 -- accel/accel.sh@21 -- # val=decompress 00:05:49.334 06:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.334 06:00:55 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:49.334 06:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:49.334 06:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:49.334 06:00:55 -- accel/accel.sh@21 -- # val='111250 bytes' 00:05:49.334 06:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.334 06:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:49.334 06:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:49.334 06:00:55 -- accel/accel.sh@21 -- # val= 00:05:49.334 06:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.334 06:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:49.334 06:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:49.334 06:00:55 -- accel/accel.sh@21 -- # val=software 00:05:49.334 06:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.334 06:00:55 -- accel/accel.sh@23 -- # accel_module=software 00:05:49.334 06:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:49.334 06:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:49.334 06:00:55 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:49.334 06:00:55 -- accel/accel.sh@22 -- # case 
"$var" in 00:05:49.334 06:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:49.334 06:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:49.334 06:00:55 -- accel/accel.sh@21 -- # val=32 00:05:49.334 06:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.334 06:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:49.334 06:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:49.334 06:00:55 -- accel/accel.sh@21 -- # val=32 00:05:49.334 06:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.335 06:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:49.335 06:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:49.335 06:00:55 -- accel/accel.sh@21 -- # val=1 00:05:49.335 06:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.335 06:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:49.335 06:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:49.335 06:00:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:49.335 06:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.335 06:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:49.335 06:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:49.335 06:00:55 -- accel/accel.sh@21 -- # val=Yes 00:05:49.335 06:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.335 06:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:49.335 06:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:49.335 06:00:55 -- accel/accel.sh@21 -- # val= 00:05:49.335 06:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.335 06:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:49.335 06:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:49.335 06:00:55 -- accel/accel.sh@21 -- # val= 00:05:49.335 06:00:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.335 06:00:55 -- accel/accel.sh@20 -- # IFS=: 00:05:49.335 06:00:55 -- accel/accel.sh@20 -- # read -r var val 00:05:50.707 06:00:57 -- accel/accel.sh@21 -- # val= 00:05:50.707 06:00:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.707 06:00:57 -- accel/accel.sh@20 -- # IFS=: 00:05:50.707 06:00:57 -- accel/accel.sh@20 -- # read -r var val 00:05:50.707 06:00:57 -- accel/accel.sh@21 -- # val= 00:05:50.707 06:00:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.707 06:00:57 -- accel/accel.sh@20 -- # IFS=: 00:05:50.707 06:00:57 -- accel/accel.sh@20 -- # read -r var val 00:05:50.707 06:00:57 -- accel/accel.sh@21 -- # val= 00:05:50.707 06:00:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.707 06:00:57 -- accel/accel.sh@20 -- # IFS=: 00:05:50.707 06:00:57 -- accel/accel.sh@20 -- # read -r var val 00:05:50.707 06:00:57 -- accel/accel.sh@21 -- # val= 00:05:50.707 06:00:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.707 06:00:57 -- accel/accel.sh@20 -- # IFS=: 00:05:50.707 06:00:57 -- accel/accel.sh@20 -- # read -r var val 00:05:50.707 06:00:57 -- accel/accel.sh@21 -- # val= 00:05:50.707 06:00:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.707 06:00:57 -- accel/accel.sh@20 -- # IFS=: 00:05:50.707 06:00:57 -- accel/accel.sh@20 -- # read -r var val 00:05:50.707 06:00:57 -- accel/accel.sh@21 -- # val= 00:05:50.707 06:00:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.707 06:00:57 -- accel/accel.sh@20 -- # IFS=: 00:05:50.707 06:00:57 -- accel/accel.sh@20 -- # read -r var val 00:05:50.707 06:00:57 -- accel/accel.sh@21 -- # val= 00:05:50.707 06:00:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.707 06:00:57 -- accel/accel.sh@20 -- # IFS=: 00:05:50.707 06:00:57 -- accel/accel.sh@20 -- # read -r var val 00:05:50.707 06:00:57 -- accel/accel.sh@21 -- # val= 00:05:50.707 06:00:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.707 
06:00:57 -- accel/accel.sh@20 -- # IFS=: 00:05:50.707 06:00:57 -- accel/accel.sh@20 -- # read -r var val 00:05:50.707 06:00:57 -- accel/accel.sh@21 -- # val= 00:05:50.707 06:00:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:50.707 06:00:57 -- accel/accel.sh@20 -- # IFS=: 00:05:50.707 06:00:57 -- accel/accel.sh@20 -- # read -r var val 00:05:50.707 06:00:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:50.707 06:00:57 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:50.707 06:00:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:50.707 00:05:50.707 real 0m3.024s 00:05:50.707 user 0m9.736s 00:05:50.707 sys 0m0.295s 00:05:50.707 06:00:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.707 06:00:57 -- common/autotest_common.sh@10 -- # set +x 00:05:50.707 ************************************ 00:05:50.707 END TEST accel_decomp_full_mcore 00:05:50.707 ************************************ 00:05:50.707 06:00:57 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:50.708 06:00:57 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:05:50.708 06:00:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:50.708 06:00:57 -- common/autotest_common.sh@10 -- # set +x 00:05:50.708 ************************************ 00:05:50.708 START TEST accel_decomp_mthread 00:05:50.708 ************************************ 00:05:50.708 06:00:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:50.708 06:00:57 -- accel/accel.sh@16 -- # local accel_opc 00:05:50.708 06:00:57 -- accel/accel.sh@17 -- # local accel_module 00:05:50.708 06:00:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:50.708 06:00:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:50.708 06:00:57 -- accel/accel.sh@12 -- # build_accel_config 00:05:50.708 06:00:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:50.708 06:00:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.708 06:00:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.708 06:00:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:50.708 06:00:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:50.708 06:00:57 -- accel/accel.sh@41 -- # local IFS=, 00:05:50.708 06:00:57 -- accel/accel.sh@42 -- # jq -r . 00:05:50.708 [2024-07-13 06:00:57.090278] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:50.708 [2024-07-13 06:00:57.090365] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1010304 ] 00:05:50.708 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.708 [2024-07-13 06:00:57.154446] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.965 [2024-07-13 06:00:57.280008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.334 06:00:58 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:05:52.334 00:05:52.334 SPDK Configuration: 00:05:52.334 Core mask: 0x1 00:05:52.334 00:05:52.335 Accel Perf Configuration: 00:05:52.335 Workload Type: decompress 00:05:52.335 Transfer size: 4096 bytes 00:05:52.335 Vector count 1 00:05:52.335 Module: software 00:05:52.335 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:52.335 Queue depth: 32 00:05:52.335 Allocate depth: 32 00:05:52.335 # threads/core: 2 00:05:52.335 Run time: 1 seconds 00:05:52.335 Verify: Yes 00:05:52.335 00:05:52.335 Running for 1 seconds... 00:05:52.335 00:05:52.335 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:52.335 ------------------------------------------------------------------------------------ 00:05:52.335 0,1 28128/s 51 MiB/s 0 0 00:05:52.335 0,0 28000/s 51 MiB/s 0 0 00:05:52.335 ==================================================================================== 00:05:52.335 Total 56128/s 219 MiB/s 0 0' 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # IFS=: 00:05:52.335 06:00:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # read -r var val 00:05:52.335 06:00:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:52.335 06:00:58 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.335 06:00:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:52.335 06:00:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.335 06:00:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.335 06:00:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:52.335 06:00:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:52.335 06:00:58 -- accel/accel.sh@41 -- # local IFS=, 00:05:52.335 06:00:58 -- accel/accel.sh@42 -- # jq -r . 00:05:52.335 [2024-07-13 06:00:58.576130] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
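accel_decomp_mthread swaps the core mask for -T 2, which is what turns "# threads/core" into 2 in the dump above: core 0 now reports two rows (0,1 and 0,0) whose rates again sum to the Total (28128 + 28000 = 56128 transfers/s, about 219 MiB/s at 4096 bytes per transfer). The traced invocation, reformatted for readability:

  # As traced at accel/accel.sh@12 for accel_decomp_mthread; -T sets worker threads per core (per the dump above).
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress \
      -l "$SPDK/test/accel/bib" -y -T 2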
00:05:52.335 [2024-07-13 06:00:58.576216] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1010569 ] 00:05:52.335 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.335 [2024-07-13 06:00:58.638320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.335 [2024-07-13 06:00:58.758248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.335 06:00:58 -- accel/accel.sh@21 -- # val= 00:05:52.335 06:00:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # IFS=: 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # read -r var val 00:05:52.335 06:00:58 -- accel/accel.sh@21 -- # val= 00:05:52.335 06:00:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # IFS=: 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # read -r var val 00:05:52.335 06:00:58 -- accel/accel.sh@21 -- # val= 00:05:52.335 06:00:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # IFS=: 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # read -r var val 00:05:52.335 06:00:58 -- accel/accel.sh@21 -- # val=0x1 00:05:52.335 06:00:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # IFS=: 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # read -r var val 00:05:52.335 06:00:58 -- accel/accel.sh@21 -- # val= 00:05:52.335 06:00:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # IFS=: 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # read -r var val 00:05:52.335 06:00:58 -- accel/accel.sh@21 -- # val= 00:05:52.335 06:00:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # IFS=: 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # read -r var val 00:05:52.335 06:00:58 -- accel/accel.sh@21 -- # val=decompress 00:05:52.335 06:00:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.335 06:00:58 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # IFS=: 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # read -r var val 00:05:52.335 06:00:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:52.335 06:00:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # IFS=: 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # read -r var val 00:05:52.335 06:00:58 -- accel/accel.sh@21 -- # val= 00:05:52.335 06:00:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # IFS=: 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # read -r var val 00:05:52.335 06:00:58 -- accel/accel.sh@21 -- # val=software 00:05:52.335 06:00:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.335 06:00:58 -- accel/accel.sh@23 -- # accel_module=software 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # IFS=: 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # read -r var val 00:05:52.335 06:00:58 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:52.335 06:00:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # IFS=: 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # read -r var val 00:05:52.335 06:00:58 -- accel/accel.sh@21 -- # val=32 00:05:52.335 06:00:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # IFS=: 00:05:52.335 06:00:58 
-- accel/accel.sh@20 -- # read -r var val 00:05:52.335 06:00:58 -- accel/accel.sh@21 -- # val=32 00:05:52.335 06:00:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # IFS=: 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # read -r var val 00:05:52.335 06:00:58 -- accel/accel.sh@21 -- # val=2 00:05:52.335 06:00:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # IFS=: 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # read -r var val 00:05:52.335 06:00:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:52.335 06:00:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # IFS=: 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # read -r var val 00:05:52.335 06:00:58 -- accel/accel.sh@21 -- # val=Yes 00:05:52.335 06:00:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # IFS=: 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # read -r var val 00:05:52.335 06:00:58 -- accel/accel.sh@21 -- # val= 00:05:52.335 06:00:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # IFS=: 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # read -r var val 00:05:52.335 06:00:58 -- accel/accel.sh@21 -- # val= 00:05:52.335 06:00:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # IFS=: 00:05:52.335 06:00:58 -- accel/accel.sh@20 -- # read -r var val 00:05:53.706 06:01:00 -- accel/accel.sh@21 -- # val= 00:05:53.706 06:01:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.706 06:01:00 -- accel/accel.sh@20 -- # IFS=: 00:05:53.706 06:01:00 -- accel/accel.sh@20 -- # read -r var val 00:05:53.706 06:01:00 -- accel/accel.sh@21 -- # val= 00:05:53.706 06:01:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.706 06:01:00 -- accel/accel.sh@20 -- # IFS=: 00:05:53.706 06:01:00 -- accel/accel.sh@20 -- # read -r var val 00:05:53.706 06:01:00 -- accel/accel.sh@21 -- # val= 00:05:53.706 06:01:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.706 06:01:00 -- accel/accel.sh@20 -- # IFS=: 00:05:53.706 06:01:00 -- accel/accel.sh@20 -- # read -r var val 00:05:53.706 06:01:00 -- accel/accel.sh@21 -- # val= 00:05:53.706 06:01:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.706 06:01:00 -- accel/accel.sh@20 -- # IFS=: 00:05:53.706 06:01:00 -- accel/accel.sh@20 -- # read -r var val 00:05:53.706 06:01:00 -- accel/accel.sh@21 -- # val= 00:05:53.706 06:01:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.706 06:01:00 -- accel/accel.sh@20 -- # IFS=: 00:05:53.706 06:01:00 -- accel/accel.sh@20 -- # read -r var val 00:05:53.706 06:01:00 -- accel/accel.sh@21 -- # val= 00:05:53.706 06:01:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.706 06:01:00 -- accel/accel.sh@20 -- # IFS=: 00:05:53.706 06:01:00 -- accel/accel.sh@20 -- # read -r var val 00:05:53.706 06:01:00 -- accel/accel.sh@21 -- # val= 00:05:53.706 06:01:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.706 06:01:00 -- accel/accel.sh@20 -- # IFS=: 00:05:53.706 06:01:00 -- accel/accel.sh@20 -- # read -r var val 00:05:53.706 06:01:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:53.706 06:01:00 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:53.706 06:01:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.706 00:05:53.706 real 0m2.983s 00:05:53.706 user 0m2.673s 00:05:53.706 sys 0m0.302s 00:05:53.706 06:01:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.706 06:01:00 -- common/autotest_common.sh@10 -- # set +x 
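The long val= trace above is accel.sh digesting the captured accel_perf printout for this run: each "Key: value" line is split on ':' and the two fields the test later asserts on, accel_opc (decompress) and accel_module (software), are stored alongside the rest of the printed configuration (4096-byte transfers, queue/allocate depth 32, 2 threads per core, 1-second run, verify on). A simplified sketch of that loop, assuming the case patterns shown here rather than the script's exact ones:

# Sketch only: recover the workload type and module name from accel_perf's "Key: value" output.
while IFS=: read -r var val; do
    val=${val# }                          # drop the single space after the colon, if any
    case "$var" in
        *'Workload Type'*) accel_opc=$val ;;     # "decompress" in the run above
        *'Module'*)        accel_module=$val ;;  # "software"
    esac
done <<< "$out"                           # $out is the captured printout (accel/accel.sh@18)
[[ -n $accel_module && -n $accel_opc ]]   # the checks visible at accel/accel.sh@28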
00:05:53.706 ************************************ 00:05:53.706 END TEST accel_decomp_mthread 00:05:53.706 ************************************ 00:05:53.706 06:01:00 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:53.706 06:01:00 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:05:53.706 06:01:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:53.706 06:01:00 -- common/autotest_common.sh@10 -- # set +x 00:05:53.706 ************************************ 00:05:53.706 START TEST accel_deomp_full_mthread 00:05:53.706 ************************************ 00:05:53.706 06:01:00 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:53.706 06:01:00 -- accel/accel.sh@16 -- # local accel_opc 00:05:53.706 06:01:00 -- accel/accel.sh@17 -- # local accel_module 00:05:53.707 06:01:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:53.707 06:01:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:53.707 06:01:00 -- accel/accel.sh@12 -- # build_accel_config 00:05:53.707 06:01:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:53.707 06:01:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.707 06:01:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.707 06:01:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:53.707 06:01:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:53.707 06:01:00 -- accel/accel.sh@41 -- # local IFS=, 00:05:53.707 06:01:00 -- accel/accel.sh@42 -- # jq -r . 00:05:53.707 [2024-07-13 06:01:00.096227] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:53.707 [2024-07-13 06:01:00.096308] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1010730 ] 00:05:53.707 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.707 [2024-07-13 06:01:00.162181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.964 [2024-07-13 06:01:00.283634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.334 06:01:01 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:55.334 00:05:55.334 SPDK Configuration: 00:05:55.334 Core mask: 0x1 00:05:55.334 00:05:55.334 Accel Perf Configuration: 00:05:55.334 Workload Type: decompress 00:05:55.334 Transfer size: 111250 bytes 00:05:55.334 Vector count 1 00:05:55.334 Module: software 00:05:55.334 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:55.334 Queue depth: 32 00:05:55.334 Allocate depth: 32 00:05:55.334 # threads/core: 2 00:05:55.334 Run time: 1 seconds 00:05:55.334 Verify: Yes 00:05:55.334 00:05:55.334 Running for 1 seconds... 
00:05:55.334 00:05:55.335 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:55.335 ------------------------------------------------------------------------------------ 00:05:55.335 0,1 1952/s 80 MiB/s 0 0 00:05:55.335 0,0 1920/s 79 MiB/s 0 0 00:05:55.335 ==================================================================================== 00:05:55.335 Total 3872/s 410 MiB/s 0 0' 00:05:55.335 06:01:01 -- accel/accel.sh@20 -- # IFS=: 00:05:55.335 06:01:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:55.335 06:01:01 -- accel/accel.sh@20 -- # read -r var val 00:05:55.335 06:01:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:55.335 06:01:01 -- accel/accel.sh@12 -- # build_accel_config 00:05:55.335 06:01:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:55.335 06:01:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.335 06:01:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.335 06:01:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:55.335 06:01:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:55.335 06:01:01 -- accel/accel.sh@41 -- # local IFS=, 00:05:55.335 06:01:01 -- accel/accel.sh@42 -- # jq -r . 00:05:55.335 [2024-07-13 06:01:01.626101] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:55.335 [2024-07-13 06:01:01.626185] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1010868 ] 00:05:55.335 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.335 [2024-07-13 06:01:01.687916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.335 [2024-07-13 06:01:01.807404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.592 06:01:01 -- accel/accel.sh@21 -- # val= 00:05:55.592 06:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.592 06:01:01 -- accel/accel.sh@20 -- # IFS=: 00:05:55.592 06:01:01 -- accel/accel.sh@20 -- # read -r var val 00:05:55.592 06:01:01 -- accel/accel.sh@21 -- # val= 00:05:55.592 06:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.592 06:01:01 -- accel/accel.sh@20 -- # IFS=: 00:05:55.592 06:01:01 -- accel/accel.sh@20 -- # read -r var val 00:05:55.593 06:01:01 -- accel/accel.sh@21 -- # val= 00:05:55.593 06:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.593 06:01:01 -- accel/accel.sh@20 -- # IFS=: 00:05:55.593 06:01:01 -- accel/accel.sh@20 -- # read -r var val 00:05:55.593 06:01:01 -- accel/accel.sh@21 -- # val=0x1 00:05:55.593 06:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.593 06:01:01 -- accel/accel.sh@20 -- # IFS=: 00:05:55.593 06:01:01 -- accel/accel.sh@20 -- # read -r var val 00:05:55.593 06:01:01 -- accel/accel.sh@21 -- # val= 00:05:55.593 06:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.593 06:01:01 -- accel/accel.sh@20 -- # IFS=: 00:05:55.593 06:01:01 -- accel/accel.sh@20 -- # read -r var val 00:05:55.593 06:01:01 -- accel/accel.sh@21 -- # val= 00:05:55.593 06:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.593 06:01:01 -- accel/accel.sh@20 -- # IFS=: 00:05:55.593 06:01:01 -- accel/accel.sh@20 -- # read -r var val 00:05:55.593 06:01:01 -- accel/accel.sh@21 -- # val=decompress 00:05:55.593 
06:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.593 06:01:01 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:55.593 06:01:01 -- accel/accel.sh@20 -- # IFS=: 00:05:55.593 06:01:01 -- accel/accel.sh@20 -- # read -r var val 00:05:55.593 06:01:01 -- accel/accel.sh@21 -- # val='111250 bytes' 00:05:55.593 06:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.593 06:01:01 -- accel/accel.sh@20 -- # IFS=: 00:05:55.593 06:01:01 -- accel/accel.sh@20 -- # read -r var val 00:05:55.593 06:01:01 -- accel/accel.sh@21 -- # val= 00:05:55.593 06:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.593 06:01:01 -- accel/accel.sh@20 -- # IFS=: 00:05:55.593 06:01:01 -- accel/accel.sh@20 -- # read -r var val 00:05:55.593 06:01:01 -- accel/accel.sh@21 -- # val=software 00:05:55.593 06:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.593 06:01:01 -- accel/accel.sh@23 -- # accel_module=software 00:05:55.593 06:01:01 -- accel/accel.sh@20 -- # IFS=: 00:05:55.593 06:01:01 -- accel/accel.sh@20 -- # read -r var val 00:05:55.593 06:01:01 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:55.593 06:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.593 06:01:01 -- accel/accel.sh@20 -- # IFS=: 00:05:55.593 06:01:01 -- accel/accel.sh@20 -- # read -r var val 00:05:55.593 06:01:01 -- accel/accel.sh@21 -- # val=32 00:05:55.593 06:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.593 06:01:01 -- accel/accel.sh@20 -- # IFS=: 00:05:55.593 06:01:01 -- accel/accel.sh@20 -- # read -r var val 00:05:55.593 06:01:01 -- accel/accel.sh@21 -- # val=32 00:05:55.593 06:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.593 06:01:01 -- accel/accel.sh@20 -- # IFS=: 00:05:55.593 06:01:01 -- accel/accel.sh@20 -- # read -r var val 00:05:55.593 06:01:01 -- accel/accel.sh@21 -- # val=2 00:05:55.593 06:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.593 06:01:01 -- accel/accel.sh@20 -- # IFS=: 00:05:55.593 06:01:01 -- accel/accel.sh@20 -- # read -r var val 00:05:55.593 06:01:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:55.593 06:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.593 06:01:01 -- accel/accel.sh@20 -- # IFS=: 00:05:55.593 06:01:01 -- accel/accel.sh@20 -- # read -r var val 00:05:55.593 06:01:01 -- accel/accel.sh@21 -- # val=Yes 00:05:55.593 06:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.593 06:01:01 -- accel/accel.sh@20 -- # IFS=: 00:05:55.593 06:01:01 -- accel/accel.sh@20 -- # read -r var val 00:05:55.593 06:01:01 -- accel/accel.sh@21 -- # val= 00:05:55.593 06:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.593 06:01:01 -- accel/accel.sh@20 -- # IFS=: 00:05:55.593 06:01:01 -- accel/accel.sh@20 -- # read -r var val 00:05:55.593 06:01:01 -- accel/accel.sh@21 -- # val= 00:05:55.593 06:01:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.593 06:01:01 -- accel/accel.sh@20 -- # IFS=: 00:05:55.593 06:01:01 -- accel/accel.sh@20 -- # read -r var val 00:05:56.963 06:01:03 -- accel/accel.sh@21 -- # val= 00:05:56.963 06:01:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.963 06:01:03 -- accel/accel.sh@20 -- # IFS=: 00:05:56.963 06:01:03 -- accel/accel.sh@20 -- # read -r var val 00:05:56.963 06:01:03 -- accel/accel.sh@21 -- # val= 00:05:56.963 06:01:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.963 06:01:03 -- accel/accel.sh@20 -- # IFS=: 00:05:56.963 06:01:03 -- accel/accel.sh@20 -- # read -r var val 00:05:56.963 06:01:03 -- accel/accel.sh@21 -- # val= 00:05:56.963 06:01:03 -- accel/accel.sh@22 -- # 
case "$var" in 00:05:56.963 06:01:03 -- accel/accel.sh@20 -- # IFS=: 00:05:56.963 06:01:03 -- accel/accel.sh@20 -- # read -r var val 00:05:56.963 06:01:03 -- accel/accel.sh@21 -- # val= 00:05:56.963 06:01:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.963 06:01:03 -- accel/accel.sh@20 -- # IFS=: 00:05:56.963 06:01:03 -- accel/accel.sh@20 -- # read -r var val 00:05:56.963 06:01:03 -- accel/accel.sh@21 -- # val= 00:05:56.963 06:01:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.963 06:01:03 -- accel/accel.sh@20 -- # IFS=: 00:05:56.963 06:01:03 -- accel/accel.sh@20 -- # read -r var val 00:05:56.963 06:01:03 -- accel/accel.sh@21 -- # val= 00:05:56.963 06:01:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.963 06:01:03 -- accel/accel.sh@20 -- # IFS=: 00:05:56.963 06:01:03 -- accel/accel.sh@20 -- # read -r var val 00:05:56.963 06:01:03 -- accel/accel.sh@21 -- # val= 00:05:56.963 06:01:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.963 06:01:03 -- accel/accel.sh@20 -- # IFS=: 00:05:56.963 06:01:03 -- accel/accel.sh@20 -- # read -r var val 00:05:56.963 06:01:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:56.963 06:01:03 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:56.963 06:01:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:56.963 00:05:56.963 real 0m3.043s 00:05:56.963 user 0m2.746s 00:05:56.963 sys 0m0.289s 00:05:56.963 06:01:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.963 06:01:03 -- common/autotest_common.sh@10 -- # set +x 00:05:56.963 ************************************ 00:05:56.963 END TEST accel_deomp_full_mthread 00:05:56.963 ************************************ 00:05:56.963 06:01:03 -- accel/accel.sh@116 -- # [[ n == y ]] 00:05:56.963 06:01:03 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:56.963 06:01:03 -- accel/accel.sh@129 -- # build_accel_config 00:05:56.963 06:01:03 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:56.963 06:01:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:56.963 06:01:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:56.963 06:01:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.963 06:01:03 -- common/autotest_common.sh@10 -- # set +x 00:05:56.963 06:01:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.963 06:01:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:56.963 06:01:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:56.963 06:01:03 -- accel/accel.sh@41 -- # local IFS=, 00:05:56.963 06:01:03 -- accel/accel.sh@42 -- # jq -r . 00:05:56.963 ************************************ 00:05:56.963 START TEST accel_dif_functional_tests 00:05:56.963 ************************************ 00:05:56.963 06:01:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:56.963 [2024-07-13 06:01:03.185942] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
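The accel_dif_functional_tests run launched above hands its (empty, software-only) accel configuration to test/accel/dif/dif as -c /dev/fd/62, i.e. through a file descriptor rather than a file on disk, after build_accel_config assembles accel_json_cfg=() and validates the JSON with jq -r . One way to reproduce that hand-off pattern by hand, as an illustration only and not accel.sh's exact plumbing:

exec 62< <(echo '{}' | jq -r .)      # expose an (empty) JSON config as /dev/fd/62
./test/accel/dif/dif -c /dev/fd/62   # same -c argument as the dif invocation in the trace above
exec 62<&-                           # close the descriptor afterwards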
00:05:56.963 [2024-07-13 06:01:03.186038] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1011158 ] 00:05:56.963 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.963 [2024-07-13 06:01:03.254575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:56.963 [2024-07-13 06:01:03.377496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.963 [2024-07-13 06:01:03.377555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.963 [2024-07-13 06:01:03.377558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.222 00:05:57.222 00:05:57.222 CUnit - A unit testing framework for C - Version 2.1-3 00:05:57.222 http://cunit.sourceforge.net/ 00:05:57.222 00:05:57.222 00:05:57.222 Suite: accel_dif 00:05:57.222 Test: verify: DIF generated, GUARD check ...passed 00:05:57.222 Test: verify: DIF generated, APPTAG check ...passed 00:05:57.222 Test: verify: DIF generated, REFTAG check ...passed 00:05:57.222 Test: verify: DIF not generated, GUARD check ...[2024-07-13 06:01:03.480105] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:57.222 [2024-07-13 06:01:03.480174] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:57.222 passed 00:05:57.222 Test: verify: DIF not generated, APPTAG check ...[2024-07-13 06:01:03.480223] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:57.222 [2024-07-13 06:01:03.480254] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:57.222 passed 00:05:57.222 Test: verify: DIF not generated, REFTAG check ...[2024-07-13 06:01:03.480290] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:57.222 [2024-07-13 06:01:03.480319] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:57.222 passed 00:05:57.222 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:57.222 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-13 06:01:03.480389] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:57.222 passed 00:05:57.222 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:57.222 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:57.222 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:57.222 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-13 06:01:03.480545] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:57.222 passed 00:05:57.222 Test: generate copy: DIF generated, GUARD check ...passed 00:05:57.222 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:57.222 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:57.222 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:57.222 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:57.222 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:57.222 Test: generate copy: iovecs-len validate ...[2024-07-13 06:01:03.480806] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:05:57.222 passed 00:05:57.222 Test: generate copy: buffer alignment validate ...passed 00:05:57.222 00:05:57.222 Run Summary: Type Total Ran Passed Failed Inactive 00:05:57.222 suites 1 1 n/a 0 0 00:05:57.222 tests 20 20 20 0 0 00:05:57.222 asserts 204 204 204 0 n/a 00:05:57.222 00:05:57.222 Elapsed time = 0.003 seconds 00:05:57.478 00:05:57.478 real 0m0.604s 00:05:57.478 user 0m0.925s 00:05:57.478 sys 0m0.183s 00:05:57.478 06:01:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.479 06:01:03 -- common/autotest_common.sh@10 -- # set +x 00:05:57.479 ************************************ 00:05:57.479 END TEST accel_dif_functional_tests 00:05:57.479 ************************************ 00:05:57.479 00:05:57.479 real 1m3.064s 00:05:57.479 user 1m10.837s 00:05:57.479 sys 0m7.288s 00:05:57.479 06:01:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.479 06:01:03 -- common/autotest_common.sh@10 -- # set +x 00:05:57.479 ************************************ 00:05:57.479 END TEST accel 00:05:57.479 ************************************ 00:05:57.479 06:01:03 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:57.479 06:01:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:57.479 06:01:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:57.479 06:01:03 -- common/autotest_common.sh@10 -- # set +x 00:05:57.479 ************************************ 00:05:57.479 START TEST accel_rpc 00:05:57.479 ************************************ 00:05:57.479 06:01:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:57.479 * Looking for test storage... 00:05:57.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:57.479 06:01:03 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:57.479 06:01:03 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1011217 00:05:57.479 06:01:03 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:57.479 06:01:03 -- accel/accel_rpc.sh@15 -- # waitforlisten 1011217 00:05:57.479 06:01:03 -- common/autotest_common.sh@819 -- # '[' -z 1011217 ']' 00:05:57.479 06:01:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.479 06:01:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:57.479 06:01:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.479 06:01:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:57.479 06:01:03 -- common/autotest_common.sh@10 -- # set +x 00:05:57.479 [2024-07-13 06:01:03.902459] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
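accel_rpc.sh, started above, launches spdk_tgt with --wait-for-rpc so the target stays in RPC-configuration mode until framework_start_init is issued later in the test, and waitforlisten 1011217 blocks until the RPC socket answers. A rough sketch of that start-up handshake using only commands visible in this log (the polling loop is an illustration, not the autotest helper itself):

./build/bin/spdk_tgt --wait-for-rpc &
spdk_tgt_pid=$!
# wait until /var/tmp/spdk.sock accepts RPCs, which is what waitforlisten does above
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done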
00:05:57.479 [2024-07-13 06:01:03.902536] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1011217 ] 00:05:57.479 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.479 [2024-07-13 06:01:03.959729] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.736 [2024-07-13 06:01:04.069021] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:57.736 [2024-07-13 06:01:04.069169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.736 06:01:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:57.736 06:01:04 -- common/autotest_common.sh@852 -- # return 0 00:05:57.736 06:01:04 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:57.736 06:01:04 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:57.736 06:01:04 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:57.736 06:01:04 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:57.736 06:01:04 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:57.736 06:01:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:57.736 06:01:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:57.736 06:01:04 -- common/autotest_common.sh@10 -- # set +x 00:05:57.736 ************************************ 00:05:57.736 START TEST accel_assign_opcode 00:05:57.736 ************************************ 00:05:57.736 06:01:04 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:05:57.736 06:01:04 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:57.736 06:01:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:57.736 06:01:04 -- common/autotest_common.sh@10 -- # set +x 00:05:57.736 [2024-07-13 06:01:04.105661] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:57.736 06:01:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:57.736 06:01:04 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:57.736 06:01:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:57.736 06:01:04 -- common/autotest_common.sh@10 -- # set +x 00:05:57.736 [2024-07-13 06:01:04.113679] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:57.736 06:01:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:57.736 06:01:04 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:57.736 06:01:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:57.736 06:01:04 -- common/autotest_common.sh@10 -- # set +x 00:05:58.004 06:01:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:58.004 06:01:04 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:58.004 06:01:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:58.004 06:01:04 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:58.004 06:01:04 -- common/autotest_common.sh@10 -- # set +x 00:05:58.004 06:01:04 -- accel/accel_rpc.sh@42 -- # grep software 00:05:58.004 06:01:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:58.004 software 00:05:58.004 00:05:58.004 real 0m0.294s 00:05:58.004 user 0m0.036s 00:05:58.004 sys 0m0.007s 00:05:58.004 06:01:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.004 06:01:04 -- common/autotest_common.sh@10 -- # set +x 
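The accel_assign_opcode test traced above drives four RPCs against the still-initializing target: assign the copy opcode to a bogus module, reassign it to software, finish initialization, then read the assignment back. The same flow issued directly with scripts/rpc.py (a sketch of what rpc_cmd wraps above, default socket assumed):

./scripts/rpc.py accel_assign_opc -o copy -m incorrect    # accepted pre-init: "copy will be assigned to module incorrect"
./scripts/rpc.py accel_assign_opc -o copy -m software     # overrides the bogus assignment
./scripts/rpc.py framework_start_init                     # leave --wait-for-rpc mode
./scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # -> software, as the grep software check confirms above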
00:05:58.004 ************************************ 00:05:58.004 END TEST accel_assign_opcode 00:05:58.004 ************************************ 00:05:58.004 06:01:04 -- accel/accel_rpc.sh@55 -- # killprocess 1011217 00:05:58.004 06:01:04 -- common/autotest_common.sh@926 -- # '[' -z 1011217 ']' 00:05:58.004 06:01:04 -- common/autotest_common.sh@930 -- # kill -0 1011217 00:05:58.004 06:01:04 -- common/autotest_common.sh@931 -- # uname 00:05:58.004 06:01:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:58.004 06:01:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1011217 00:05:58.004 06:01:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:58.004 06:01:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:58.004 06:01:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1011217' 00:05:58.004 killing process with pid 1011217 00:05:58.004 06:01:04 -- common/autotest_common.sh@945 -- # kill 1011217 00:05:58.004 06:01:04 -- common/autotest_common.sh@950 -- # wait 1011217 00:05:58.572 00:05:58.572 real 0m1.122s 00:05:58.572 user 0m1.053s 00:05:58.572 sys 0m0.392s 00:05:58.572 06:01:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.572 06:01:04 -- common/autotest_common.sh@10 -- # set +x 00:05:58.572 ************************************ 00:05:58.572 END TEST accel_rpc 00:05:58.572 ************************************ 00:05:58.572 06:01:04 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:58.572 06:01:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:58.572 06:01:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:58.572 06:01:04 -- common/autotest_common.sh@10 -- # set +x 00:05:58.572 ************************************ 00:05:58.572 START TEST app_cmdline 00:05:58.572 ************************************ 00:05:58.572 06:01:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:58.572 * Looking for test storage... 00:05:58.572 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:58.572 06:01:05 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:58.572 06:01:05 -- app/cmdline.sh@17 -- # spdk_tgt_pid=1011426 00:05:58.572 06:01:05 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:58.572 06:01:05 -- app/cmdline.sh@18 -- # waitforlisten 1011426 00:05:58.572 06:01:05 -- common/autotest_common.sh@819 -- # '[' -z 1011426 ']' 00:05:58.572 06:01:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.572 06:01:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:58.572 06:01:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.572 06:01:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:58.572 06:01:05 -- common/autotest_common.sh@10 -- # set +x 00:05:58.572 [2024-07-13 06:01:05.051109] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
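For the app_cmdline test started above, spdk_tgt is launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are reachable; the env_dpdk_get_mem_stats call later in the test is expected to fail with JSON-RPC error -32601 (Method not found). A sketch of the same allow-list behaviour checked by hand, using only commands that appear in this log:

./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
./scripts/rpc.py spdk_get_version          # allowed: returns the version object printed below
./scripts/rpc.py rpc_get_methods           # allowed: lists exactly the two permitted methods
./scripts/rpc.py env_dpdk_get_mem_stats    # rejected with "Method not found" (-32601), as below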
00:05:58.572 [2024-07-13 06:01:05.051210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1011426 ] 00:05:58.572 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.830 [2024-07-13 06:01:05.111203] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.830 [2024-07-13 06:01:05.221030] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:58.830 [2024-07-13 06:01:05.221188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.762 06:01:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:59.762 06:01:06 -- common/autotest_common.sh@852 -- # return 0 00:05:59.762 06:01:06 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:00.020 { 00:06:00.020 "version": "SPDK v24.01.1-pre git sha1 4b94202c6", 00:06:00.020 "fields": { 00:06:00.020 "major": 24, 00:06:00.020 "minor": 1, 00:06:00.020 "patch": 1, 00:06:00.020 "suffix": "-pre", 00:06:00.020 "commit": "4b94202c6" 00:06:00.020 } 00:06:00.020 } 00:06:00.020 06:01:06 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:00.020 06:01:06 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:00.020 06:01:06 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:00.020 06:01:06 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:00.020 06:01:06 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:00.020 06:01:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:00.020 06:01:06 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:00.020 06:01:06 -- common/autotest_common.sh@10 -- # set +x 00:06:00.020 06:01:06 -- app/cmdline.sh@26 -- # sort 00:06:00.020 06:01:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:00.020 06:01:06 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:00.020 06:01:06 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:00.020 06:01:06 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:00.020 06:01:06 -- common/autotest_common.sh@640 -- # local es=0 00:06:00.020 06:01:06 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:00.020 06:01:06 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:00.020 06:01:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:00.020 06:01:06 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:00.020 06:01:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:00.020 06:01:06 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:00.020 06:01:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:00.020 06:01:06 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:00.020 06:01:06 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:00.020 06:01:06 -- 
common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:00.279 request: 00:06:00.279 { 00:06:00.279 "method": "env_dpdk_get_mem_stats", 00:06:00.279 "req_id": 1 00:06:00.279 } 00:06:00.279 Got JSON-RPC error response 00:06:00.279 response: 00:06:00.279 { 00:06:00.279 "code": -32601, 00:06:00.279 "message": "Method not found" 00:06:00.279 } 00:06:00.279 06:01:06 -- common/autotest_common.sh@643 -- # es=1 00:06:00.279 06:01:06 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:00.279 06:01:06 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:00.279 06:01:06 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:00.279 06:01:06 -- app/cmdline.sh@1 -- # killprocess 1011426 00:06:00.279 06:01:06 -- common/autotest_common.sh@926 -- # '[' -z 1011426 ']' 00:06:00.279 06:01:06 -- common/autotest_common.sh@930 -- # kill -0 1011426 00:06:00.279 06:01:06 -- common/autotest_common.sh@931 -- # uname 00:06:00.279 06:01:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:00.279 06:01:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1011426 00:06:00.279 06:01:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:00.279 06:01:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:00.279 06:01:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1011426' 00:06:00.279 killing process with pid 1011426 00:06:00.279 06:01:06 -- common/autotest_common.sh@945 -- # kill 1011426 00:06:00.279 06:01:06 -- common/autotest_common.sh@950 -- # wait 1011426 00:06:00.846 00:06:00.846 real 0m2.138s 00:06:00.846 user 0m2.693s 00:06:00.846 sys 0m0.506s 00:06:00.846 06:01:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.846 06:01:07 -- common/autotest_common.sh@10 -- # set +x 00:06:00.846 ************************************ 00:06:00.846 END TEST app_cmdline 00:06:00.846 ************************************ 00:06:00.846 06:01:07 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:00.846 06:01:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:00.846 06:01:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:00.846 06:01:07 -- common/autotest_common.sh@10 -- # set +x 00:06:00.846 ************************************ 00:06:00.846 START TEST version 00:06:00.846 ************************************ 00:06:00.846 06:01:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:00.846 * Looking for test storage... 
00:06:00.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:00.846 06:01:07 -- app/version.sh@17 -- # get_header_version major 00:06:00.846 06:01:07 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:00.846 06:01:07 -- app/version.sh@14 -- # cut -f2 00:06:00.846 06:01:07 -- app/version.sh@14 -- # tr -d '"' 00:06:00.846 06:01:07 -- app/version.sh@17 -- # major=24 00:06:00.846 06:01:07 -- app/version.sh@18 -- # get_header_version minor 00:06:00.846 06:01:07 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:00.846 06:01:07 -- app/version.sh@14 -- # cut -f2 00:06:00.846 06:01:07 -- app/version.sh@14 -- # tr -d '"' 00:06:00.846 06:01:07 -- app/version.sh@18 -- # minor=1 00:06:00.846 06:01:07 -- app/version.sh@19 -- # get_header_version patch 00:06:00.846 06:01:07 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:00.846 06:01:07 -- app/version.sh@14 -- # cut -f2 00:06:00.846 06:01:07 -- app/version.sh@14 -- # tr -d '"' 00:06:00.846 06:01:07 -- app/version.sh@19 -- # patch=1 00:06:00.846 06:01:07 -- app/version.sh@20 -- # get_header_version suffix 00:06:00.846 06:01:07 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:00.846 06:01:07 -- app/version.sh@14 -- # cut -f2 00:06:00.846 06:01:07 -- app/version.sh@14 -- # tr -d '"' 00:06:00.846 06:01:07 -- app/version.sh@20 -- # suffix=-pre 00:06:00.846 06:01:07 -- app/version.sh@22 -- # version=24.1 00:06:00.846 06:01:07 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:00.846 06:01:07 -- app/version.sh@25 -- # version=24.1.1 00:06:00.846 06:01:07 -- app/version.sh@28 -- # version=24.1.1rc0 00:06:00.846 06:01:07 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:00.846 06:01:07 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:00.846 06:01:07 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:06:00.846 06:01:07 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:06:00.846 00:06:00.846 real 0m0.112s 00:06:00.846 user 0m0.064s 00:06:00.846 sys 0m0.070s 00:06:00.846 06:01:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.846 06:01:07 -- common/autotest_common.sh@10 -- # set +x 00:06:00.846 ************************************ 00:06:00.846 END TEST version 00:06:00.846 ************************************ 00:06:00.846 06:01:07 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:06:00.846 06:01:07 -- spdk/autotest.sh@204 -- # uname -s 00:06:00.846 06:01:07 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:06:00.846 06:01:07 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:00.846 06:01:07 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:00.846 06:01:07 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:06:00.846 06:01:07 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:06:00.846 06:01:07 -- spdk/autotest.sh@268 -- # timing_exit lib 00:06:00.846 06:01:07 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:06:00.846 06:01:07 -- common/autotest_common.sh@10 -- # set +x 00:06:00.846 06:01:07 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:00.846 06:01:07 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:06:00.846 06:01:07 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:06:00.846 06:01:07 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:06:00.846 06:01:07 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:06:00.846 06:01:07 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:06:00.846 06:01:07 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:00.846 06:01:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:00.846 06:01:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:00.846 06:01:07 -- common/autotest_common.sh@10 -- # set +x 00:06:00.846 ************************************ 00:06:00.846 START TEST nvmf_tcp 00:06:00.846 ************************************ 00:06:00.846 06:01:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:00.846 * Looking for test storage... 00:06:00.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:00.846 06:01:07 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:00.846 06:01:07 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:00.846 06:01:07 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:00.846 06:01:07 -- nvmf/common.sh@7 -- # uname -s 00:06:00.846 06:01:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:00.846 06:01:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:00.846 06:01:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:00.846 06:01:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:00.846 06:01:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:00.846 06:01:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:00.846 06:01:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:00.846 06:01:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:00.846 06:01:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:00.846 06:01:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:00.846 06:01:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:00.846 06:01:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:00.846 06:01:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:00.846 06:01:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:00.846 06:01:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:00.846 06:01:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:00.846 06:01:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:00.846 06:01:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:00.846 06:01:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:00.846 06:01:07 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.846 06:01:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.846 06:01:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.847 06:01:07 -- paths/export.sh@5 -- # export PATH 00:06:00.847 06:01:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.847 06:01:07 -- nvmf/common.sh@46 -- # : 0 00:06:00.847 06:01:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:00.847 06:01:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:00.847 06:01:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:00.847 06:01:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:00.847 06:01:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:00.847 06:01:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:00.847 06:01:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:00.847 06:01:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:00.847 06:01:07 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:00.847 06:01:07 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:00.847 06:01:07 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:00.847 06:01:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:00.847 06:01:07 -- common/autotest_common.sh@10 -- # set +x 00:06:00.847 06:01:07 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:00.847 06:01:07 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:00.847 06:01:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:00.847 06:01:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:00.847 06:01:07 -- common/autotest_common.sh@10 -- # set +x 00:06:00.847 ************************************ 00:06:00.847 START TEST nvmf_example 00:06:00.847 ************************************ 00:06:00.847 06:01:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:01.106 * Looking for test storage... 
00:06:01.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:01.106 06:01:07 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:01.106 06:01:07 -- nvmf/common.sh@7 -- # uname -s 00:06:01.106 06:01:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:01.106 06:01:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:01.106 06:01:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:01.106 06:01:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:01.106 06:01:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:01.106 06:01:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:01.106 06:01:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:01.106 06:01:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:01.106 06:01:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:01.106 06:01:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:01.106 06:01:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:01.106 06:01:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:01.106 06:01:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:01.106 06:01:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:01.106 06:01:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:01.106 06:01:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:01.106 06:01:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:01.106 06:01:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:01.106 06:01:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:01.106 06:01:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.106 06:01:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.106 06:01:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.106 06:01:07 -- paths/export.sh@5 -- # export PATH 00:06:01.106 06:01:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.106 06:01:07 -- nvmf/common.sh@46 -- # : 0 00:06:01.106 06:01:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:01.106 06:01:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:01.106 06:01:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:01.106 06:01:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:01.106 06:01:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:01.106 06:01:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:01.106 06:01:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:01.106 06:01:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:01.106 06:01:07 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:01.106 06:01:07 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:01.106 06:01:07 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:01.106 06:01:07 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:01.106 06:01:07 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:01.106 06:01:07 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:01.106 06:01:07 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:01.106 06:01:07 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:01.106 06:01:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:01.106 06:01:07 -- common/autotest_common.sh@10 -- # set +x 00:06:01.106 06:01:07 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:01.106 06:01:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:01.106 06:01:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:01.106 06:01:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:01.106 06:01:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:01.106 06:01:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:01.106 06:01:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:01.106 06:01:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:01.106 06:01:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:01.106 06:01:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:06:01.106 06:01:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:06:01.106 06:01:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:06:01.106 06:01:07 -- 
common/autotest_common.sh@10 -- # set +x 00:06:03.005 06:01:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:03.005 06:01:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:06:03.005 06:01:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:06:03.005 06:01:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:06:03.005 06:01:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:06:03.005 06:01:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:06:03.005 06:01:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:06:03.005 06:01:09 -- nvmf/common.sh@294 -- # net_devs=() 00:06:03.005 06:01:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:06:03.005 06:01:09 -- nvmf/common.sh@295 -- # e810=() 00:06:03.005 06:01:09 -- nvmf/common.sh@295 -- # local -ga e810 00:06:03.005 06:01:09 -- nvmf/common.sh@296 -- # x722=() 00:06:03.005 06:01:09 -- nvmf/common.sh@296 -- # local -ga x722 00:06:03.005 06:01:09 -- nvmf/common.sh@297 -- # mlx=() 00:06:03.005 06:01:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:06:03.005 06:01:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:03.005 06:01:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:03.005 06:01:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:03.005 06:01:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:03.005 06:01:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:03.005 06:01:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:03.005 06:01:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:03.005 06:01:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:03.005 06:01:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:03.005 06:01:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:03.005 06:01:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:03.005 06:01:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:06:03.005 06:01:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:06:03.005 06:01:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:06:03.005 06:01:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:06:03.005 06:01:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:06:03.005 06:01:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:06:03.005 06:01:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:03.005 06:01:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:03.005 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:03.005 06:01:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:03.005 06:01:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:03.005 06:01:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:03.005 06:01:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:03.005 06:01:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:03.005 06:01:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:03.005 06:01:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:03.005 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:03.005 06:01:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:03.005 06:01:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:03.005 06:01:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:03.005 06:01:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
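gather_supported_nvmf_pci_devs above builds its e810/x722/mlx lists from PCI vendor:device IDs (Intel 0x8086 with 0x1592/0x159b/0x37d2, Mellanox 0x15b3 with the listed 0x10xx/0xa2xx parts) and matches them against the bus scan, which is how the "Found 0000:0a:00.0 (0x8086 - 0x159b)" line above, and its twin for 0000:0a:00.1 just below, appear. The same devices can be listed directly; illustrative commands, not part of the test:

lspci -d 8086:159b    # the E810 ports found at 0000:0a:00.0 and 0000:0a:00.1 in this run
lspci -d 8086:1592    # the other E810 device ID the script also recognizes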
00:06:03.005 06:01:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:03.005 06:01:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:06:03.005 06:01:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:06:03.005 06:01:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:06:03.005 06:01:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:03.005 06:01:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:03.005 06:01:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:03.005 06:01:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:03.005 06:01:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:03.005 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:03.005 06:01:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:03.005 06:01:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:03.005 06:01:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:03.005 06:01:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:03.005 06:01:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:03.005 06:01:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:03.005 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:03.005 06:01:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:03.005 06:01:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:06:03.005 06:01:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:06:03.005 06:01:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:06:03.005 06:01:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:06:03.005 06:01:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:06:03.005 06:01:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:03.005 06:01:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:03.005 06:01:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:03.005 06:01:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:06:03.005 06:01:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:03.005 06:01:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:03.005 06:01:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:06:03.005 06:01:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:03.005 06:01:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:03.005 06:01:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:06:03.005 06:01:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:06:03.005 06:01:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:06:03.005 06:01:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:03.262 06:01:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:03.262 06:01:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:03.262 06:01:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:06:03.262 06:01:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:03.262 06:01:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:03.262 06:01:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:03.262 06:01:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:06:03.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:03.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:06:03.262 00:06:03.262 --- 10.0.0.2 ping statistics --- 00:06:03.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:03.262 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:06:03.262 06:01:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:03.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:03.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:06:03.262 00:06:03.262 --- 10.0.0.1 ping statistics --- 00:06:03.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:03.262 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:06:03.262 06:01:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:03.262 06:01:09 -- nvmf/common.sh@410 -- # return 0 00:06:03.262 06:01:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:03.262 06:01:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:03.262 06:01:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:03.262 06:01:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:03.262 06:01:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:03.262 06:01:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:03.262 06:01:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:03.262 06:01:09 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:03.262 06:01:09 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:03.262 06:01:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:03.262 06:01:09 -- common/autotest_common.sh@10 -- # set +x 00:06:03.262 06:01:09 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:03.262 06:01:09 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:03.262 06:01:09 -- target/nvmf_example.sh@34 -- # nvmfpid=1013460 00:06:03.262 06:01:09 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:03.262 06:01:09 -- target/nvmf_example.sh@36 -- # waitforlisten 1013460 00:06:03.262 06:01:09 -- common/autotest_common.sh@819 -- # '[' -z 1013460 ']' 00:06:03.262 06:01:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.262 06:01:09 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:03.262 06:01:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:03.262 06:01:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
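Note: the nvmf_tcp_init trace above boils down to a small two-namespace topology: one port of the E810 pair (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target at 10.0.0.2, the other port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in iptables, and connectivity is verified with a ping in each direction. A minimal standalone sketch of that setup follows; interface names, addresses, and the port are the values from this particular run, not fixed by the harness.

# Sketch: the namespace/TCP topology built by nvmf_tcp_init in this run.
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # default ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> default ns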
00:06:03.262 06:01:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:03.262 06:01:09 -- common/autotest_common.sh@10 -- # set +x 00:06:03.262 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.221 06:01:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:04.221 06:01:10 -- common/autotest_common.sh@852 -- # return 0 00:06:04.221 06:01:10 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:04.221 06:01:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:04.221 06:01:10 -- common/autotest_common.sh@10 -- # set +x 00:06:04.221 06:01:10 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:04.221 06:01:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.221 06:01:10 -- common/autotest_common.sh@10 -- # set +x 00:06:04.221 06:01:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.221 06:01:10 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:04.221 06:01:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.221 06:01:10 -- common/autotest_common.sh@10 -- # set +x 00:06:04.221 06:01:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.221 06:01:10 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:04.221 06:01:10 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:04.221 06:01:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.221 06:01:10 -- common/autotest_common.sh@10 -- # set +x 00:06:04.478 06:01:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.478 06:01:10 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:04.478 06:01:10 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:04.478 06:01:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.478 06:01:10 -- common/autotest_common.sh@10 -- # set +x 00:06:04.478 06:01:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.478 06:01:10 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:04.478 06:01:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:04.478 06:01:10 -- common/autotest_common.sh@10 -- # set +x 00:06:04.478 06:01:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.478 06:01:10 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:04.478 06:01:10 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:04.478 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.691 Initializing NVMe Controllers 00:06:16.691 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:16.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:16.691 Initialization complete. Launching workers. 
00:06:16.691 ======================================================== 00:06:16.691 Latency(us) 00:06:16.691 Device Information : IOPS MiB/s Average min max 00:06:16.691 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15173.79 59.27 4217.29 832.32 16207.90 00:06:16.691 ======================================================== 00:06:16.691 Total : 15173.79 59.27 4217.29 832.32 16207.90 00:06:16.691 00:06:16.691 06:01:21 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:16.691 06:01:21 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:16.691 06:01:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:06:16.691 06:01:21 -- nvmf/common.sh@116 -- # sync 00:06:16.691 06:01:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:06:16.691 06:01:21 -- nvmf/common.sh@119 -- # set +e 00:06:16.691 06:01:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:06:16.691 06:01:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:06:16.691 rmmod nvme_tcp 00:06:16.691 rmmod nvme_fabrics 00:06:16.691 rmmod nvme_keyring 00:06:16.691 06:01:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:06:16.691 06:01:21 -- nvmf/common.sh@123 -- # set -e 00:06:16.691 06:01:21 -- nvmf/common.sh@124 -- # return 0 00:06:16.691 06:01:21 -- nvmf/common.sh@477 -- # '[' -n 1013460 ']' 00:06:16.691 06:01:21 -- nvmf/common.sh@478 -- # killprocess 1013460 00:06:16.691 06:01:21 -- common/autotest_common.sh@926 -- # '[' -z 1013460 ']' 00:06:16.691 06:01:21 -- common/autotest_common.sh@930 -- # kill -0 1013460 00:06:16.691 06:01:21 -- common/autotest_common.sh@931 -- # uname 00:06:16.691 06:01:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:16.691 06:01:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1013460 00:06:16.691 06:01:21 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:06:16.691 06:01:21 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:06:16.691 06:01:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1013460' 00:06:16.691 killing process with pid 1013460 00:06:16.691 06:01:21 -- common/autotest_common.sh@945 -- # kill 1013460 00:06:16.691 06:01:21 -- common/autotest_common.sh@950 -- # wait 1013460 00:06:16.691 nvmf threads initialize successfully 00:06:16.691 bdev subsystem init successfully 00:06:16.691 created a nvmf target service 00:06:16.691 create targets's poll groups done 00:06:16.691 all subsystems of target started 00:06:16.691 nvmf target is running 00:06:16.691 all subsystems of target stopped 00:06:16.691 destroy targets's poll groups done 00:06:16.691 destroyed the nvmf target service 00:06:16.691 bdev subsystem finish successfully 00:06:16.691 nvmf threads destroy successfully 00:06:16.691 06:01:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:06:16.691 06:01:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:06:16.691 06:01:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:06:16.691 06:01:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:16.691 06:01:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:06:16.691 06:01:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:16.691 06:01:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:16.691 06:01:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:16.949 06:01:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:06:16.949 06:01:23 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:16.949 06:01:23 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:06:16.949 06:01:23 -- common/autotest_common.sh@10 -- # set +x 00:06:16.949 00:06:16.949 real 0m16.082s 00:06:16.949 user 0m45.592s 00:06:16.949 sys 0m3.263s 00:06:16.949 06:01:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.949 06:01:23 -- common/autotest_common.sh@10 -- # set +x 00:06:16.949 ************************************ 00:06:16.949 END TEST nvmf_example 00:06:16.949 ************************************ 00:06:16.949 06:01:23 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:16.949 06:01:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:16.949 06:01:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:16.949 06:01:23 -- common/autotest_common.sh@10 -- # set +x 00:06:16.949 ************************************ 00:06:16.949 START TEST nvmf_filesystem 00:06:16.949 ************************************ 00:06:16.949 06:01:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:17.209 * Looking for test storage... 00:06:17.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:17.209 06:01:23 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:17.209 06:01:23 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:17.209 06:01:23 -- common/autotest_common.sh@34 -- # set -e 00:06:17.209 06:01:23 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:17.209 06:01:23 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:17.209 06:01:23 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:17.209 06:01:23 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:17.209 06:01:23 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:17.209 06:01:23 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:17.209 06:01:23 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:17.209 06:01:23 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:17.209 06:01:23 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:17.209 06:01:23 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:17.209 06:01:23 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:17.209 06:01:23 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:17.209 06:01:23 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:17.209 06:01:23 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:17.209 06:01:23 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:17.209 06:01:23 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:17.209 06:01:23 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:17.209 06:01:23 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:17.209 06:01:23 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:17.209 06:01:23 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:17.209 06:01:23 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:17.209 06:01:23 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:17.209 06:01:23 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:17.209 06:01:23 -- 
common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:17.209 06:01:23 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:17.209 06:01:23 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:17.209 06:01:23 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:17.209 06:01:23 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:17.209 06:01:23 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:17.209 06:01:23 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:17.209 06:01:23 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:17.209 06:01:23 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:17.209 06:01:23 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:17.209 06:01:23 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:17.209 06:01:23 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:17.209 06:01:23 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:17.209 06:01:23 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:17.209 06:01:23 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:17.209 06:01:23 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:17.209 06:01:23 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:17.209 06:01:23 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:17.209 06:01:23 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:17.209 06:01:23 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:17.209 06:01:23 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:17.209 06:01:23 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:17.209 06:01:23 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:17.209 06:01:23 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:17.209 06:01:23 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:17.209 06:01:23 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:17.209 06:01:23 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:06:17.209 06:01:23 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:06:17.209 06:01:23 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:17.209 06:01:23 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:06:17.209 06:01:23 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:06:17.209 06:01:23 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:06:17.209 06:01:23 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:06:17.209 06:01:23 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:06:17.209 06:01:23 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:06:17.209 06:01:23 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:06:17.209 06:01:23 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:06:17.209 06:01:23 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:06:17.209 06:01:23 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:06:17.209 06:01:23 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:06:17.209 06:01:23 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:06:17.209 06:01:23 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:06:17.209 06:01:23 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:06:17.209 06:01:23 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:06:17.209 06:01:23 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:06:17.209 06:01:23 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:06:17.209 06:01:23 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 
00:06:17.209 06:01:23 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:06:17.209 06:01:23 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:06:17.209 06:01:23 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:06:17.209 06:01:23 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:06:17.209 06:01:23 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:06:17.209 06:01:23 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:06:17.209 06:01:23 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:06:17.209 06:01:23 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:06:17.209 06:01:23 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:06:17.209 06:01:23 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:06:17.209 06:01:23 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:17.209 06:01:23 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:06:17.209 06:01:23 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:06:17.209 06:01:23 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:17.209 06:01:23 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:17.209 06:01:23 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:17.209 06:01:23 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:17.210 06:01:23 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:17.210 06:01:23 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:17.210 06:01:23 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:17.210 06:01:23 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:17.210 06:01:23 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:17.210 06:01:23 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:17.210 06:01:23 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:17.210 06:01:23 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:17.210 06:01:23 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:17.210 06:01:23 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:17.210 06:01:23 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:17.210 06:01:23 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:17.210 #define SPDK_CONFIG_H 00:06:17.210 #define SPDK_CONFIG_APPS 1 00:06:17.210 #define SPDK_CONFIG_ARCH native 00:06:17.210 #undef SPDK_CONFIG_ASAN 00:06:17.210 #undef SPDK_CONFIG_AVAHI 00:06:17.210 #undef SPDK_CONFIG_CET 00:06:17.210 #define SPDK_CONFIG_COVERAGE 1 00:06:17.210 #define SPDK_CONFIG_CROSS_PREFIX 00:06:17.210 #undef SPDK_CONFIG_CRYPTO 00:06:17.210 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:17.210 #undef SPDK_CONFIG_CUSTOMOCF 00:06:17.210 #undef SPDK_CONFIG_DAOS 00:06:17.210 #define SPDK_CONFIG_DAOS_DIR 00:06:17.210 #define SPDK_CONFIG_DEBUG 1 00:06:17.210 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:17.210 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:17.210 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:17.210 #define 
SPDK_CONFIG_DPDK_LIB_DIR 00:06:17.210 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:17.210 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:17.210 #define SPDK_CONFIG_EXAMPLES 1 00:06:17.210 #undef SPDK_CONFIG_FC 00:06:17.210 #define SPDK_CONFIG_FC_PATH 00:06:17.210 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:17.210 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:17.210 #undef SPDK_CONFIG_FUSE 00:06:17.210 #undef SPDK_CONFIG_FUZZER 00:06:17.210 #define SPDK_CONFIG_FUZZER_LIB 00:06:17.210 #undef SPDK_CONFIG_GOLANG 00:06:17.210 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:17.210 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:17.210 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:17.210 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:17.210 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:17.210 #define SPDK_CONFIG_IDXD 1 00:06:17.210 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:17.210 #undef SPDK_CONFIG_IPSEC_MB 00:06:17.210 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:17.210 #define SPDK_CONFIG_ISAL 1 00:06:17.210 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:17.210 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:17.210 #define SPDK_CONFIG_LIBDIR 00:06:17.210 #undef SPDK_CONFIG_LTO 00:06:17.210 #define SPDK_CONFIG_MAX_LCORES 00:06:17.210 #define SPDK_CONFIG_NVME_CUSE 1 00:06:17.210 #undef SPDK_CONFIG_OCF 00:06:17.210 #define SPDK_CONFIG_OCF_PATH 00:06:17.210 #define SPDK_CONFIG_OPENSSL_PATH 00:06:17.210 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:17.210 #undef SPDK_CONFIG_PGO_USE 00:06:17.210 #define SPDK_CONFIG_PREFIX /usr/local 00:06:17.210 #undef SPDK_CONFIG_RAID5F 00:06:17.210 #undef SPDK_CONFIG_RBD 00:06:17.210 #define SPDK_CONFIG_RDMA 1 00:06:17.210 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:17.210 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:17.210 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:17.210 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:17.210 #define SPDK_CONFIG_SHARED 1 00:06:17.210 #undef SPDK_CONFIG_SMA 00:06:17.210 #define SPDK_CONFIG_TESTS 1 00:06:17.210 #undef SPDK_CONFIG_TSAN 00:06:17.210 #define SPDK_CONFIG_UBLK 1 00:06:17.210 #define SPDK_CONFIG_UBSAN 1 00:06:17.210 #undef SPDK_CONFIG_UNIT_TESTS 00:06:17.210 #undef SPDK_CONFIG_URING 00:06:17.210 #define SPDK_CONFIG_URING_PATH 00:06:17.210 #undef SPDK_CONFIG_URING_ZNS 00:06:17.210 #undef SPDK_CONFIG_USDT 00:06:17.210 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:17.210 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:17.210 #undef SPDK_CONFIG_VFIO_USER 00:06:17.210 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:17.210 #define SPDK_CONFIG_VHOST 1 00:06:17.210 #define SPDK_CONFIG_VIRTIO 1 00:06:17.210 #undef SPDK_CONFIG_VTUNE 00:06:17.210 #define SPDK_CONFIG_VTUNE_DIR 00:06:17.210 #define SPDK_CONFIG_WERROR 1 00:06:17.210 #define SPDK_CONFIG_WPDK_DIR 00:06:17.210 #undef SPDK_CONFIG_XNVME 00:06:17.210 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:17.210 06:01:23 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:17.210 06:01:23 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:17.210 06:01:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:17.210 06:01:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:17.210 06:01:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:17.210 06:01:23 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.210 06:01:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.210 06:01:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.210 06:01:23 -- paths/export.sh@5 -- # export PATH 00:06:17.210 06:01:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.210 06:01:23 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:17.210 06:01:23 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:17.210 06:01:23 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:17.210 06:01:23 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:17.210 06:01:23 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:17.210 06:01:23 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:17.210 06:01:23 -- pm/common@16 -- # TEST_TAG=N/A 00:06:17.210 06:01:23 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:17.210 06:01:23 -- common/autotest_common.sh@52 -- # : 1 00:06:17.210 06:01:23 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:06:17.210 06:01:23 -- common/autotest_common.sh@56 -- # : 0 00:06:17.210 06:01:23 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:17.210 06:01:23 -- 
common/autotest_common.sh@58 -- # : 0 00:06:17.210 06:01:23 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:06:17.210 06:01:23 -- common/autotest_common.sh@60 -- # : 1 00:06:17.210 06:01:23 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:17.210 06:01:23 -- common/autotest_common.sh@62 -- # : 0 00:06:17.210 06:01:23 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:06:17.210 06:01:23 -- common/autotest_common.sh@64 -- # : 00:06:17.210 06:01:23 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:06:17.210 06:01:23 -- common/autotest_common.sh@66 -- # : 0 00:06:17.210 06:01:23 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:06:17.210 06:01:23 -- common/autotest_common.sh@68 -- # : 0 00:06:17.210 06:01:23 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:06:17.210 06:01:23 -- common/autotest_common.sh@70 -- # : 0 00:06:17.210 06:01:23 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:06:17.210 06:01:23 -- common/autotest_common.sh@72 -- # : 0 00:06:17.210 06:01:23 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:17.210 06:01:23 -- common/autotest_common.sh@74 -- # : 0 00:06:17.210 06:01:23 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:06:17.210 06:01:23 -- common/autotest_common.sh@76 -- # : 0 00:06:17.210 06:01:23 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:06:17.210 06:01:23 -- common/autotest_common.sh@78 -- # : 0 00:06:17.210 06:01:23 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:06:17.210 06:01:23 -- common/autotest_common.sh@80 -- # : 1 00:06:17.210 06:01:23 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:06:17.210 06:01:23 -- common/autotest_common.sh@82 -- # : 0 00:06:17.210 06:01:23 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:06:17.210 06:01:23 -- common/autotest_common.sh@84 -- # : 0 00:06:17.210 06:01:23 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:06:17.210 06:01:23 -- common/autotest_common.sh@86 -- # : 1 00:06:17.210 06:01:23 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:06:17.210 06:01:23 -- common/autotest_common.sh@88 -- # : 0 00:06:17.210 06:01:23 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:06:17.210 06:01:23 -- common/autotest_common.sh@90 -- # : 0 00:06:17.210 06:01:23 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:17.210 06:01:23 -- common/autotest_common.sh@92 -- # : 0 00:06:17.210 06:01:23 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:06:17.210 06:01:23 -- common/autotest_common.sh@94 -- # : 0 00:06:17.210 06:01:23 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:06:17.210 06:01:23 -- common/autotest_common.sh@96 -- # : tcp 00:06:17.210 06:01:23 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:17.210 06:01:23 -- common/autotest_common.sh@98 -- # : 0 00:06:17.210 06:01:23 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:06:17.210 06:01:23 -- common/autotest_common.sh@100 -- # : 0 00:06:17.210 06:01:23 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:06:17.210 06:01:23 -- common/autotest_common.sh@102 -- # : 0 00:06:17.210 06:01:23 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:06:17.210 06:01:23 -- common/autotest_common.sh@104 -- # : 0 00:06:17.210 06:01:23 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:06:17.210 
06:01:23 -- common/autotest_common.sh@106 -- # : 0 00:06:17.210 06:01:23 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:06:17.210 06:01:23 -- common/autotest_common.sh@108 -- # : 0 00:06:17.211 06:01:23 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:06:17.211 06:01:23 -- common/autotest_common.sh@110 -- # : 0 00:06:17.211 06:01:23 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:06:17.211 06:01:23 -- common/autotest_common.sh@112 -- # : 0 00:06:17.211 06:01:23 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:17.211 06:01:23 -- common/autotest_common.sh@114 -- # : 0 00:06:17.211 06:01:23 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:06:17.211 06:01:23 -- common/autotest_common.sh@116 -- # : 1 00:06:17.211 06:01:23 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:06:17.211 06:01:23 -- common/autotest_common.sh@118 -- # : 00:06:17.211 06:01:23 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:17.211 06:01:23 -- common/autotest_common.sh@120 -- # : 0 00:06:17.211 06:01:23 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:06:17.211 06:01:23 -- common/autotest_common.sh@122 -- # : 0 00:06:17.211 06:01:23 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:06:17.211 06:01:23 -- common/autotest_common.sh@124 -- # : 0 00:06:17.211 06:01:23 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:06:17.211 06:01:23 -- common/autotest_common.sh@126 -- # : 0 00:06:17.211 06:01:23 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:06:17.211 06:01:23 -- common/autotest_common.sh@128 -- # : 0 00:06:17.211 06:01:23 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:06:17.211 06:01:23 -- common/autotest_common.sh@130 -- # : 0 00:06:17.211 06:01:23 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:06:17.211 06:01:23 -- common/autotest_common.sh@132 -- # : 00:06:17.211 06:01:23 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:06:17.211 06:01:23 -- common/autotest_common.sh@134 -- # : true 00:06:17.211 06:01:23 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:06:17.211 06:01:23 -- common/autotest_common.sh@136 -- # : 0 00:06:17.211 06:01:23 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:06:17.211 06:01:23 -- common/autotest_common.sh@138 -- # : 0 00:06:17.211 06:01:23 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:06:17.211 06:01:23 -- common/autotest_common.sh@140 -- # : 0 00:06:17.211 06:01:23 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:06:17.211 06:01:23 -- common/autotest_common.sh@142 -- # : 0 00:06:17.211 06:01:23 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:06:17.211 06:01:23 -- common/autotest_common.sh@144 -- # : 0 00:06:17.211 06:01:23 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:06:17.211 06:01:23 -- common/autotest_common.sh@146 -- # : 0 00:06:17.211 06:01:23 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:06:17.211 06:01:23 -- common/autotest_common.sh@148 -- # : e810 00:06:17.211 06:01:23 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:06:17.211 06:01:23 -- common/autotest_common.sh@150 -- # : 0 00:06:17.211 06:01:23 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:06:17.211 06:01:23 -- common/autotest_common.sh@152 -- # : 0 00:06:17.211 06:01:23 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 
00:06:17.211 06:01:23 -- common/autotest_common.sh@154 -- # : 0 00:06:17.211 06:01:23 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:06:17.211 06:01:23 -- common/autotest_common.sh@156 -- # : 0 00:06:17.211 06:01:23 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:06:17.211 06:01:23 -- common/autotest_common.sh@158 -- # : 0 00:06:17.211 06:01:23 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:06:17.211 06:01:23 -- common/autotest_common.sh@160 -- # : 0 00:06:17.211 06:01:23 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:06:17.211 06:01:23 -- common/autotest_common.sh@163 -- # : 00:06:17.211 06:01:23 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:06:17.211 06:01:23 -- common/autotest_common.sh@165 -- # : 0 00:06:17.211 06:01:23 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:06:17.211 06:01:23 -- common/autotest_common.sh@167 -- # : 0 00:06:17.211 06:01:23 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:17.211 06:01:23 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:17.211 06:01:23 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:17.211 06:01:23 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:17.211 06:01:23 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:17.211 06:01:23 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:17.211 06:01:23 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:17.211 06:01:23 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:17.211 06:01:23 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:17.211 06:01:23 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:17.211 06:01:23 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:17.211 06:01:23 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:17.211 06:01:23 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:17.211 06:01:23 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:17.211 06:01:23 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:06:17.211 06:01:23 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:17.211 06:01:23 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:17.211 06:01:23 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:17.211 06:01:23 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:17.211 06:01:23 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:17.211 06:01:23 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:06:17.211 06:01:23 -- common/autotest_common.sh@196 -- # cat 00:06:17.211 06:01:23 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:06:17.211 06:01:23 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:17.211 06:01:23 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:17.211 06:01:23 -- common/autotest_common.sh@226 -- # export 
DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:17.211 06:01:23 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:17.211 06:01:23 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:06:17.211 06:01:23 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:06:17.211 06:01:23 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:17.211 06:01:23 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:17.211 06:01:23 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:17.211 06:01:23 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:17.211 06:01:23 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:17.211 06:01:23 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:17.211 06:01:23 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:17.211 06:01:23 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:17.211 06:01:23 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:17.211 06:01:23 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:17.211 06:01:23 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:17.211 06:01:23 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:17.211 06:01:23 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:06:17.211 06:01:23 -- common/autotest_common.sh@249 -- # export valgrind= 00:06:17.211 06:01:23 -- common/autotest_common.sh@249 -- # valgrind= 00:06:17.211 06:01:23 -- common/autotest_common.sh@255 -- # uname -s 00:06:17.211 06:01:23 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:06:17.211 06:01:23 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:06:17.211 06:01:23 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:06:17.211 06:01:23 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:06:17.211 06:01:23 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:06:17.211 06:01:23 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:06:17.211 06:01:23 -- common/autotest_common.sh@265 -- # MAKE=make 00:06:17.211 06:01:23 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j48 00:06:17.211 06:01:23 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:06:17.211 06:01:23 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:06:17.211 06:01:23 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:17.211 06:01:23 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:06:17.211 06:01:23 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:06:17.211 06:01:23 -- common/autotest_common.sh@291 -- # for i in "$@" 00:06:17.211 06:01:23 -- common/autotest_common.sh@292 -- # case "$i" in 00:06:17.211 06:01:23 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:06:17.211 06:01:23 -- common/autotest_common.sh@309 -- # [[ -z 1015340 ]] 00:06:17.211 06:01:23 -- common/autotest_common.sh@309 -- # 
kill -0 1015340 00:06:17.211 06:01:23 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:06:17.211 06:01:23 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:06:17.211 06:01:23 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:06:17.212 06:01:23 -- common/autotest_common.sh@322 -- # local mount target_dir 00:06:17.212 06:01:23 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:06:17.212 06:01:23 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:06:17.212 06:01:23 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:06:17.212 06:01:23 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:06:17.212 06:01:23 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.3ixqQQ 00:06:17.212 06:01:23 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:17.212 06:01:23 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:06:17.212 06:01:23 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:06:17.212 06:01:23 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.3ixqQQ/tests/target /tmp/spdk.3ixqQQ 00:06:17.212 06:01:23 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:06:17.212 06:01:23 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:17.212 06:01:23 -- common/autotest_common.sh@318 -- # df -T 00:06:17.212 06:01:23 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:06:17.212 06:01:23 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:06:17.212 06:01:23 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:06:17.212 06:01:23 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:06:17.212 06:01:23 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:06:17.212 06:01:23 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:06:17.212 06:01:23 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:17.212 06:01:23 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:06:17.212 06:01:23 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:06:17.212 06:01:23 -- common/autotest_common.sh@353 -- # avails["$mount"]=953643008 00:06:17.212 06:01:23 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:06:17.212 06:01:23 -- common/autotest_common.sh@354 -- # uses["$mount"]=4330786816 00:06:17.212 06:01:23 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:17.212 06:01:23 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:06:17.212 06:01:23 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:06:17.212 06:01:23 -- common/autotest_common.sh@353 -- # avails["$mount"]=55630020608 00:06:17.212 06:01:23 -- common/autotest_common.sh@353 -- # sizes["$mount"]=61994708992 00:06:17.212 06:01:23 -- common/autotest_common.sh@354 -- # uses["$mount"]=6364688384 00:06:17.212 06:01:23 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:17.212 06:01:23 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:17.212 06:01:23 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:17.212 06:01:23 -- common/autotest_common.sh@353 -- # avails["$mount"]=30943834112 00:06:17.212 06:01:23 -- common/autotest_common.sh@353 -- # 
sizes["$mount"]=30997352448 00:06:17.212 06:01:23 -- common/autotest_common.sh@354 -- # uses["$mount"]=53518336 00:06:17.212 06:01:23 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:17.212 06:01:23 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:17.212 06:01:23 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:17.212 06:01:23 -- common/autotest_common.sh@353 -- # avails["$mount"]=12390182912 00:06:17.212 06:01:23 -- common/autotest_common.sh@353 -- # sizes["$mount"]=12398944256 00:06:17.212 06:01:23 -- common/autotest_common.sh@354 -- # uses["$mount"]=8761344 00:06:17.212 06:01:23 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:17.212 06:01:23 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:17.212 06:01:23 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:17.212 06:01:23 -- common/autotest_common.sh@353 -- # avails["$mount"]=30996516864 00:06:17.212 06:01:23 -- common/autotest_common.sh@353 -- # sizes["$mount"]=30997356544 00:06:17.212 06:01:23 -- common/autotest_common.sh@354 -- # uses["$mount"]=839680 00:06:17.212 06:01:23 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:17.212 06:01:23 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:17.212 06:01:23 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:17.212 06:01:23 -- common/autotest_common.sh@353 -- # avails["$mount"]=6199463936 00:06:17.212 06:01:23 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6199468032 00:06:17.212 06:01:23 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:06:17.212 06:01:23 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:17.212 06:01:23 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:06:17.212 * Looking for test storage... 
00:06:17.212 06:01:23 -- common/autotest_common.sh@359 -- # local target_space new_size 00:06:17.212 06:01:23 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:06:17.212 06:01:23 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:17.212 06:01:23 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:17.212 06:01:23 -- common/autotest_common.sh@363 -- # mount=/ 00:06:17.212 06:01:23 -- common/autotest_common.sh@365 -- # target_space=55630020608 00:06:17.212 06:01:23 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:06:17.212 06:01:23 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:06:17.212 06:01:23 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:06:17.212 06:01:23 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:06:17.212 06:01:23 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:06:17.212 06:01:23 -- common/autotest_common.sh@372 -- # new_size=8579280896 00:06:17.212 06:01:23 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:17.212 06:01:23 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:17.212 06:01:23 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:17.212 06:01:23 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:17.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:17.212 06:01:23 -- common/autotest_common.sh@380 -- # return 0 00:06:17.212 06:01:23 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:06:17.212 06:01:23 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:06:17.212 06:01:23 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:17.212 06:01:23 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:17.212 06:01:23 -- common/autotest_common.sh@1672 -- # true 00:06:17.212 06:01:23 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:06:17.212 06:01:23 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:17.212 06:01:23 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:17.212 06:01:23 -- common/autotest_common.sh@27 -- # exec 00:06:17.212 06:01:23 -- common/autotest_common.sh@29 -- # exec 00:06:17.212 06:01:23 -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:17.212 06:01:23 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:17.212 06:01:23 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:17.212 06:01:23 -- common/autotest_common.sh@18 -- # set -x 00:06:17.212 06:01:23 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:17.212 06:01:23 -- nvmf/common.sh@7 -- # uname -s 00:06:17.212 06:01:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:17.212 06:01:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:17.212 06:01:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:17.212 06:01:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:17.212 06:01:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:17.212 06:01:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:17.212 06:01:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:17.212 06:01:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:17.212 06:01:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:17.212 06:01:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:17.212 06:01:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:17.212 06:01:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:17.212 06:01:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:17.212 06:01:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:17.212 06:01:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:17.212 06:01:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:17.212 06:01:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:17.212 06:01:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:17.212 06:01:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:17.212 06:01:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.212 06:01:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.212 06:01:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.212 06:01:23 -- paths/export.sh@5 -- # export PATH 00:06:17.212 06:01:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.212 06:01:23 -- nvmf/common.sh@46 -- # : 0 00:06:17.212 06:01:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:17.212 06:01:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:17.212 06:01:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:17.212 06:01:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:17.212 06:01:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:17.212 06:01:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:17.212 06:01:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:17.212 06:01:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:17.212 06:01:23 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:17.212 06:01:23 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:17.212 06:01:23 -- target/filesystem.sh@15 -- # nvmftestinit 00:06:17.212 06:01:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:17.213 06:01:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:17.213 06:01:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:17.213 06:01:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:17.213 06:01:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:17.213 06:01:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:17.213 06:01:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:17.213 06:01:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:17.213 06:01:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:06:17.213 06:01:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:06:17.213 06:01:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:06:17.213 06:01:23 -- common/autotest_common.sh@10 -- # set +x 00:06:19.112 06:01:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:19.112 06:01:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:06:19.112 06:01:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:06:19.112 06:01:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:06:19.112 06:01:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:06:19.112 06:01:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:06:19.112 06:01:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:06:19.112 06:01:25 -- 
nvmf/common.sh@294 -- # net_devs=() 00:06:19.112 06:01:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:06:19.112 06:01:25 -- nvmf/common.sh@295 -- # e810=() 00:06:19.112 06:01:25 -- nvmf/common.sh@295 -- # local -ga e810 00:06:19.112 06:01:25 -- nvmf/common.sh@296 -- # x722=() 00:06:19.112 06:01:25 -- nvmf/common.sh@296 -- # local -ga x722 00:06:19.112 06:01:25 -- nvmf/common.sh@297 -- # mlx=() 00:06:19.112 06:01:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:06:19.112 06:01:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:19.112 06:01:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:19.112 06:01:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:19.112 06:01:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:19.112 06:01:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:19.112 06:01:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:19.112 06:01:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:19.112 06:01:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:19.112 06:01:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:19.112 06:01:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:19.112 06:01:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:19.112 06:01:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:06:19.112 06:01:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:06:19.112 06:01:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:06:19.112 06:01:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:06:19.112 06:01:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:06:19.112 06:01:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:06:19.112 06:01:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:19.112 06:01:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:19.112 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:19.112 06:01:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:19.112 06:01:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:19.112 06:01:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:19.112 06:01:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:19.112 06:01:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:19.112 06:01:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:19.112 06:01:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:19.112 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:19.112 06:01:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:19.112 06:01:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:19.112 06:01:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:19.112 06:01:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:19.112 06:01:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:19.112 06:01:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:06:19.112 06:01:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:06:19.112 06:01:25 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:06:19.112 06:01:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:19.112 06:01:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:19.112 06:01:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:19.112 06:01:25 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:19.112 06:01:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:19.112 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:19.112 06:01:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:19.112 06:01:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:19.112 06:01:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:19.112 06:01:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:19.112 06:01:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:19.112 06:01:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:19.112 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:19.112 06:01:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:19.112 06:01:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:06:19.112 06:01:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:06:19.112 06:01:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:06:19.112 06:01:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:06:19.112 06:01:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:06:19.112 06:01:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:19.112 06:01:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:19.113 06:01:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:19.113 06:01:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:06:19.113 06:01:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:19.113 06:01:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:19.113 06:01:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:06:19.113 06:01:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:19.113 06:01:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:19.113 06:01:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:06:19.113 06:01:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:06:19.113 06:01:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:06:19.113 06:01:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:19.371 06:01:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:19.371 06:01:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:19.371 06:01:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:06:19.371 06:01:25 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:19.371 06:01:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:19.371 06:01:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:19.371 06:01:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:06:19.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:19.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:06:19.371 00:06:19.371 --- 10.0.0.2 ping statistics --- 00:06:19.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:19.371 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:06:19.371 06:01:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:19.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:19.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:06:19.371 00:06:19.371 --- 10.0.0.1 ping statistics --- 00:06:19.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:19.371 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:06:19.371 06:01:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:19.371 06:01:25 -- nvmf/common.sh@410 -- # return 0 00:06:19.371 06:01:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:19.371 06:01:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:19.371 06:01:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:19.371 06:01:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:19.371 06:01:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:19.371 06:01:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:19.371 06:01:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:19.371 06:01:25 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:19.371 06:01:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:19.371 06:01:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:19.371 06:01:25 -- common/autotest_common.sh@10 -- # set +x 00:06:19.371 ************************************ 00:06:19.371 START TEST nvmf_filesystem_no_in_capsule 00:06:19.371 ************************************ 00:06:19.371 06:01:25 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:06:19.371 06:01:25 -- target/filesystem.sh@47 -- # in_capsule=0 00:06:19.371 06:01:25 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:19.371 06:01:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:06:19.371 06:01:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:19.371 06:01:25 -- common/autotest_common.sh@10 -- # set +x 00:06:19.371 06:01:25 -- nvmf/common.sh@469 -- # nvmfpid=1016965 00:06:19.371 06:01:25 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:19.371 06:01:25 -- nvmf/common.sh@470 -- # waitforlisten 1016965 00:06:19.371 06:01:25 -- common/autotest_common.sh@819 -- # '[' -z 1016965 ']' 00:06:19.371 06:01:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.371 06:01:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:19.371 06:01:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.371 06:01:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:19.371 06:01:25 -- common/autotest_common.sh@10 -- # set +x 00:06:19.371 [2024-07-13 06:01:25.786085] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
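The nvmf_tcp_init steps traced above wire the two E810 ports (matched earlier by PCI ID 0x159b and exposed as cvl_0_0 and cvl_0_1) into a self-contained loopback topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and carries the NVMe/TCP target address 10.0.0.2, while cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1; the two pings confirm reachability in both directions before the target application is started. Condensed from the commands in the trace (interface names and addresses exactly as logged):

    ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one E810 port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # accept NVMe/TCP traffic (port 4420)
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator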
00:06:19.371 [2024-07-13 06:01:25.786155] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:19.371 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.371 [2024-07-13 06:01:25.856761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:19.628 [2024-07-13 06:01:25.980816] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:19.628 [2024-07-13 06:01:25.981017] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:19.628 [2024-07-13 06:01:25.981038] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:19.628 [2024-07-13 06:01:25.981053] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:19.628 [2024-07-13 06:01:25.981110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.628 [2024-07-13 06:01:25.981142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.628 [2024-07-13 06:01:25.981204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:19.628 [2024-07-13 06:01:25.981207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.559 06:01:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:20.559 06:01:26 -- common/autotest_common.sh@852 -- # return 0 00:06:20.559 06:01:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:06:20.559 06:01:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:20.559 06:01:26 -- common/autotest_common.sh@10 -- # set +x 00:06:20.559 06:01:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:20.559 06:01:26 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:20.559 06:01:26 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:20.559 06:01:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:20.559 06:01:26 -- common/autotest_common.sh@10 -- # set +x 00:06:20.559 [2024-07-13 06:01:26.842602] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:20.559 06:01:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:20.559 06:01:26 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:20.559 06:01:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:20.559 06:01:26 -- common/autotest_common.sh@10 -- # set +x 00:06:20.559 Malloc1 00:06:20.559 06:01:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:20.559 06:01:26 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:20.559 06:01:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:20.559 06:01:26 -- common/autotest_common.sh@10 -- # set +x 00:06:20.559 06:01:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:20.559 06:01:27 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:20.559 06:01:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:20.559 06:01:27 -- common/autotest_common.sh@10 -- # set +x 00:06:20.559 06:01:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:20.559 06:01:27 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
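On the target side, nvmfappstart launches nvmf_tgt inside that namespace and the rpc_cmd calls traced above configure it; rpc_cmd is the suite's wrapper for issuing these RPCs, shown here as the equivalent scripts/rpc.py invocations (paths relative to the SPDK repo). The -c value is the in-capsule data size, 0 for this first pass:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # shm id 0, tracepoint mask 0xFFFF, 4-core mask
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1                       # 512 MiB ramdisk bdev, 512-byte blocks -> 1048576 blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a allow any host, -s serial number
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420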
00:06:20.559 06:01:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:20.559 06:01:27 -- common/autotest_common.sh@10 -- # set +x 00:06:20.559 [2024-07-13 06:01:27.016372] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:20.559 06:01:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:20.559 06:01:27 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:20.559 06:01:27 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:06:20.559 06:01:27 -- common/autotest_common.sh@1358 -- # local bdev_info 00:06:20.559 06:01:27 -- common/autotest_common.sh@1359 -- # local bs 00:06:20.559 06:01:27 -- common/autotest_common.sh@1360 -- # local nb 00:06:20.559 06:01:27 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:20.559 06:01:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:20.559 06:01:27 -- common/autotest_common.sh@10 -- # set +x 00:06:20.559 06:01:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:20.559 06:01:27 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:06:20.559 { 00:06:20.559 "name": "Malloc1", 00:06:20.559 "aliases": [ 00:06:20.559 "bf0ffdf6-1948-4c4d-8678-8aeff01646d2" 00:06:20.559 ], 00:06:20.559 "product_name": "Malloc disk", 00:06:20.559 "block_size": 512, 00:06:20.559 "num_blocks": 1048576, 00:06:20.559 "uuid": "bf0ffdf6-1948-4c4d-8678-8aeff01646d2", 00:06:20.559 "assigned_rate_limits": { 00:06:20.559 "rw_ios_per_sec": 0, 00:06:20.559 "rw_mbytes_per_sec": 0, 00:06:20.559 "r_mbytes_per_sec": 0, 00:06:20.559 "w_mbytes_per_sec": 0 00:06:20.559 }, 00:06:20.559 "claimed": true, 00:06:20.559 "claim_type": "exclusive_write", 00:06:20.559 "zoned": false, 00:06:20.559 "supported_io_types": { 00:06:20.559 "read": true, 00:06:20.559 "write": true, 00:06:20.559 "unmap": true, 00:06:20.559 "write_zeroes": true, 00:06:20.559 "flush": true, 00:06:20.559 "reset": true, 00:06:20.559 "compare": false, 00:06:20.559 "compare_and_write": false, 00:06:20.559 "abort": true, 00:06:20.559 "nvme_admin": false, 00:06:20.559 "nvme_io": false 00:06:20.559 }, 00:06:20.559 "memory_domains": [ 00:06:20.559 { 00:06:20.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:20.559 "dma_device_type": 2 00:06:20.559 } 00:06:20.559 ], 00:06:20.559 "driver_specific": {} 00:06:20.559 } 00:06:20.559 ]' 00:06:20.559 06:01:27 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:06:20.816 06:01:27 -- common/autotest_common.sh@1362 -- # bs=512 00:06:20.816 06:01:27 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:06:20.816 06:01:27 -- common/autotest_common.sh@1363 -- # nb=1048576 00:06:20.816 06:01:27 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:06:20.816 06:01:27 -- common/autotest_common.sh@1367 -- # echo 512 00:06:20.816 06:01:27 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:20.816 06:01:27 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:21.381 06:01:27 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:21.381 06:01:27 -- common/autotest_common.sh@1177 -- # local i=0 00:06:21.381 06:01:27 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:06:21.381 06:01:27 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:06:21.381 06:01:27 -- common/autotest_common.sh@1184 -- # sleep 2 00:06:23.274 06:01:29 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:06:23.274 06:01:29 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:06:23.274 06:01:29 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:06:23.274 06:01:29 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:06:23.274 06:01:29 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:06:23.274 06:01:29 -- common/autotest_common.sh@1187 -- # return 0 00:06:23.274 06:01:29 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:23.274 06:01:29 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:23.274 06:01:29 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:23.274 06:01:29 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:23.274 06:01:29 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:23.274 06:01:29 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:23.274 06:01:29 -- setup/common.sh@80 -- # echo 536870912 00:06:23.274 06:01:29 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:23.274 06:01:29 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:23.274 06:01:29 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:23.274 06:01:29 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:23.532 06:01:29 -- target/filesystem.sh@69 -- # partprobe 00:06:24.096 06:01:30 -- target/filesystem.sh@70 -- # sleep 1 00:06:25.466 06:01:31 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:25.466 06:01:31 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:25.466 06:01:31 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:25.466 06:01:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:25.466 06:01:31 -- common/autotest_common.sh@10 -- # set +x 00:06:25.466 ************************************ 00:06:25.466 START TEST filesystem_ext4 00:06:25.466 ************************************ 00:06:25.466 06:01:31 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:25.466 06:01:31 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:25.466 06:01:31 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:25.466 06:01:31 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:25.466 06:01:31 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:06:25.466 06:01:31 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:25.466 06:01:31 -- common/autotest_common.sh@904 -- # local i=0 00:06:25.466 06:01:31 -- common/autotest_common.sh@905 -- # local force 00:06:25.466 06:01:31 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:06:25.466 06:01:31 -- common/autotest_common.sh@908 -- # force=-F 00:06:25.466 06:01:31 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:25.466 mke2fs 1.46.5 (30-Dec-2021) 00:06:25.466 Discarding device blocks: 0/522240 done 00:06:25.466 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:25.466 Filesystem UUID: 8466260c-aa69-48f3-8899-ef358998f216 00:06:25.466 Superblock backups stored on blocks: 00:06:25.466 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:25.466 00:06:25.466 Allocating group tables: 0/64 done 00:06:25.466 Writing inode tables: 0/64 done 00:06:28.012 Creating journal (8192 blocks): done 00:06:29.089 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:06:29.089 00:06:29.089 06:01:35 -- 
common/autotest_common.sh@921 -- # return 0 00:06:29.089 06:01:35 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:29.089 06:01:35 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:29.089 06:01:35 -- target/filesystem.sh@25 -- # sync 00:06:29.089 06:01:35 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:29.089 06:01:35 -- target/filesystem.sh@27 -- # sync 00:06:29.346 06:01:35 -- target/filesystem.sh@29 -- # i=0 00:06:29.346 06:01:35 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:29.346 06:01:35 -- target/filesystem.sh@37 -- # kill -0 1016965 00:06:29.346 06:01:35 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:29.346 06:01:35 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:29.346 06:01:35 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:29.346 06:01:35 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:29.346 00:06:29.346 real 0m4.106s 00:06:29.346 user 0m0.024s 00:06:29.346 sys 0m0.052s 00:06:29.346 06:01:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.346 06:01:35 -- common/autotest_common.sh@10 -- # set +x 00:06:29.346 ************************************ 00:06:29.346 END TEST filesystem_ext4 00:06:29.346 ************************************ 00:06:29.346 06:01:35 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:29.346 06:01:35 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:29.346 06:01:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:29.346 06:01:35 -- common/autotest_common.sh@10 -- # set +x 00:06:29.346 ************************************ 00:06:29.346 START TEST filesystem_btrfs 00:06:29.346 ************************************ 00:06:29.346 06:01:35 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:29.346 06:01:35 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:29.346 06:01:35 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:29.346 06:01:35 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:29.346 06:01:35 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:06:29.346 06:01:35 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:29.346 06:01:35 -- common/autotest_common.sh@904 -- # local i=0 00:06:29.346 06:01:35 -- common/autotest_common.sh@905 -- # local force 00:06:29.346 06:01:35 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:06:29.346 06:01:35 -- common/autotest_common.sh@910 -- # force=-f 00:06:29.346 06:01:35 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:29.602 btrfs-progs v6.6.2 00:06:29.602 See https://btrfs.readthedocs.io for more information. 00:06:29.602 00:06:29.602 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
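Each filesystem_* sub-test (the ext4 run that just completed, the btrfs run whose mkfs output continues below, and xfs) exercises the same host-side pattern against the exported namespace. Condensed from the trace; $NVME_HOSTNQN/$NVME_HOSTID are the values produced by nvme gen-hostnqn earlier in the log, and $nvmfpid is the nvmf_tgt PID:

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME        # waitforserial polls this until one device appears
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')   # -> nvme0n1
    parted -s /dev/$nvme_name mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe
    mkdir -p /mnt/device
    mkfs.ext4 -F /dev/${nvme_name}p1       # or mkfs.btrfs -f / mkfs.xfs -f, per sub-test
    mount /dev/${nvme_name}p1 /mnt/device
    touch /mnt/device/aaa                  # write through the NVMe/TCP path
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                                    # target process must still be alive
    lsblk -l -o NAME | grep -q -w ${nvme_name}            # namespace still visible
    lsblk -l -o NAME | grep -q -w ${nvme_name}p1          # partition still visible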
00:06:29.602 NOTE: several default settings have changed in version 5.15, please make sure 00:06:29.602 this does not affect your deployments: 00:06:29.602 - DUP for metadata (-m dup) 00:06:29.602 - enabled no-holes (-O no-holes) 00:06:29.602 - enabled free-space-tree (-R free-space-tree) 00:06:29.602 00:06:29.602 Label: (null) 00:06:29.602 UUID: e4e54965-493d-44a8-9abb-5ea141a0a4b0 00:06:29.602 Node size: 16384 00:06:29.602 Sector size: 4096 00:06:29.602 Filesystem size: 510.00MiB 00:06:29.602 Block group profiles: 00:06:29.602 Data: single 8.00MiB 00:06:29.602 Metadata: DUP 32.00MiB 00:06:29.602 System: DUP 8.00MiB 00:06:29.602 SSD detected: yes 00:06:29.602 Zoned device: no 00:06:29.602 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:29.602 Runtime features: free-space-tree 00:06:29.603 Checksum: crc32c 00:06:29.603 Number of devices: 1 00:06:29.603 Devices: 00:06:29.603 ID SIZE PATH 00:06:29.603 1 510.00MiB /dev/nvme0n1p1 00:06:29.603 00:06:29.603 06:01:36 -- common/autotest_common.sh@921 -- # return 0 00:06:29.603 06:01:36 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:30.533 06:01:36 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:30.533 06:01:36 -- target/filesystem.sh@25 -- # sync 00:06:30.533 06:01:36 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:30.534 06:01:36 -- target/filesystem.sh@27 -- # sync 00:06:30.534 06:01:36 -- target/filesystem.sh@29 -- # i=0 00:06:30.534 06:01:36 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:30.534 06:01:36 -- target/filesystem.sh@37 -- # kill -0 1016965 00:06:30.534 06:01:36 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:30.534 06:01:36 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:30.534 06:01:36 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:30.534 06:01:36 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:30.534 00:06:30.534 real 0m1.226s 00:06:30.534 user 0m0.021s 00:06:30.534 sys 0m0.113s 00:06:30.534 06:01:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.534 06:01:36 -- common/autotest_common.sh@10 -- # set +x 00:06:30.534 ************************************ 00:06:30.534 END TEST filesystem_btrfs 00:06:30.534 ************************************ 00:06:30.534 06:01:36 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:30.534 06:01:36 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:30.534 06:01:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:30.534 06:01:36 -- common/autotest_common.sh@10 -- # set +x 00:06:30.534 ************************************ 00:06:30.534 START TEST filesystem_xfs 00:06:30.534 ************************************ 00:06:30.534 06:01:36 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:06:30.534 06:01:36 -- target/filesystem.sh@18 -- # fstype=xfs 00:06:30.534 06:01:36 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:30.534 06:01:36 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:30.534 06:01:36 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:06:30.534 06:01:36 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:30.534 06:01:36 -- common/autotest_common.sh@904 -- # local i=0 00:06:30.534 06:01:36 -- common/autotest_common.sh@905 -- # local force 00:06:30.534 06:01:36 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:06:30.534 06:01:36 -- common/autotest_common.sh@910 -- # force=-f 00:06:30.534 06:01:36 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:30.534 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:30.534 = sectsz=512 attr=2, projid32bit=1 00:06:30.534 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:30.534 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:30.534 data = bsize=4096 blocks=130560, imaxpct=25 00:06:30.534 = sunit=0 swidth=0 blks 00:06:30.534 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:30.534 log =internal log bsize=4096 blocks=16384, version=2 00:06:30.534 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:30.534 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:31.902 Discarding blocks...Done. 00:06:31.902 06:01:38 -- common/autotest_common.sh@921 -- # return 0 00:06:31.902 06:01:38 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:33.797 06:01:39 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:33.797 06:01:39 -- target/filesystem.sh@25 -- # sync 00:06:33.797 06:01:39 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:33.797 06:01:39 -- target/filesystem.sh@27 -- # sync 00:06:33.797 06:01:39 -- target/filesystem.sh@29 -- # i=0 00:06:33.797 06:01:39 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:33.797 06:01:39 -- target/filesystem.sh@37 -- # kill -0 1016965 00:06:33.797 06:01:39 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:33.797 06:01:39 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:33.797 06:01:39 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:33.797 06:01:39 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:33.797 00:06:33.797 real 0m2.950s 00:06:33.797 user 0m0.021s 00:06:33.797 sys 0m0.056s 00:06:33.797 06:01:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.797 06:01:39 -- common/autotest_common.sh@10 -- # set +x 00:06:33.797 ************************************ 00:06:33.797 END TEST filesystem_xfs 00:06:33.797 ************************************ 00:06:33.797 06:01:39 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:33.797 06:01:40 -- target/filesystem.sh@93 -- # sync 00:06:33.797 06:01:40 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:33.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:33.797 06:01:40 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:33.797 06:01:40 -- common/autotest_common.sh@1198 -- # local i=0 00:06:33.797 06:01:40 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:06:33.797 06:01:40 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:33.797 06:01:40 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:06:33.797 06:01:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:33.797 06:01:40 -- common/autotest_common.sh@1210 -- # return 0 00:06:33.797 06:01:40 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:33.797 06:01:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:33.797 06:01:40 -- common/autotest_common.sh@10 -- # set +x 00:06:33.797 06:01:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:33.797 06:01:40 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:33.797 06:01:40 -- target/filesystem.sh@101 -- # killprocess 1016965 00:06:33.797 06:01:40 -- common/autotest_common.sh@926 -- # '[' -z 1016965 ']' 00:06:33.797 06:01:40 -- common/autotest_common.sh@930 -- # kill -0 1016965 00:06:34.055 06:01:40 -- 
common/autotest_common.sh@931 -- # uname 00:06:34.055 06:01:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:34.055 06:01:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1016965 00:06:34.055 06:01:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:34.055 06:01:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:34.055 06:01:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1016965' 00:06:34.055 killing process with pid 1016965 00:06:34.055 06:01:40 -- common/autotest_common.sh@945 -- # kill 1016965 00:06:34.055 06:01:40 -- common/autotest_common.sh@950 -- # wait 1016965 00:06:34.312 06:01:40 -- target/filesystem.sh@102 -- # nvmfpid= 00:06:34.312 00:06:34.312 real 0m15.082s 00:06:34.312 user 0m58.101s 00:06:34.312 sys 0m2.087s 00:06:34.570 06:01:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.570 06:01:40 -- common/autotest_common.sh@10 -- # set +x 00:06:34.570 ************************************ 00:06:34.570 END TEST nvmf_filesystem_no_in_capsule 00:06:34.570 ************************************ 00:06:34.570 06:01:40 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:34.570 06:01:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:34.570 06:01:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:34.570 06:01:40 -- common/autotest_common.sh@10 -- # set +x 00:06:34.570 ************************************ 00:06:34.570 START TEST nvmf_filesystem_in_capsule 00:06:34.570 ************************************ 00:06:34.570 06:01:40 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:06:34.570 06:01:40 -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:34.570 06:01:40 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:34.570 06:01:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:06:34.570 06:01:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:34.570 06:01:40 -- common/autotest_common.sh@10 -- # set +x 00:06:34.570 06:01:40 -- nvmf/common.sh@469 -- # nvmfpid=1018986 00:06:34.570 06:01:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:34.570 06:01:40 -- nvmf/common.sh@470 -- # waitforlisten 1018986 00:06:34.570 06:01:40 -- common/autotest_common.sh@819 -- # '[' -z 1018986 ']' 00:06:34.570 06:01:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.570 06:01:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:34.570 06:01:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.570 06:01:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:34.570 06:01:40 -- common/autotest_common.sh@10 -- # set +x 00:06:34.570 [2024-07-13 06:01:40.901304] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:34.570 [2024-07-13 06:01:40.901381] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:34.570 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.570 [2024-07-13 06:01:40.965112] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:34.570 [2024-07-13 06:01:41.073984] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:34.570 [2024-07-13 06:01:41.074120] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:34.570 [2024-07-13 06:01:41.074136] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:34.570 [2024-07-13 06:01:41.074148] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:34.570 [2024-07-13 06:01:41.074200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.570 [2024-07-13 06:01:41.074257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.570 [2024-07-13 06:01:41.074323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.570 [2024-07-13 06:01:41.074325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.502 06:01:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:35.502 06:01:41 -- common/autotest_common.sh@852 -- # return 0 00:06:35.502 06:01:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:06:35.502 06:01:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:35.502 06:01:41 -- common/autotest_common.sh@10 -- # set +x 00:06:35.502 06:01:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:35.502 06:01:41 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:35.502 06:01:41 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:35.502 06:01:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:35.502 06:01:41 -- common/autotest_common.sh@10 -- # set +x 00:06:35.502 [2024-07-13 06:01:41.910463] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:35.502 06:01:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:35.502 06:01:41 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:35.502 06:01:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:35.502 06:01:41 -- common/autotest_common.sh@10 -- # set +x 00:06:35.760 Malloc1 00:06:35.760 06:01:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:35.760 06:01:42 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:35.760 06:01:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:35.760 06:01:42 -- common/autotest_common.sh@10 -- # set +x 00:06:35.760 06:01:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:35.760 06:01:42 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:35.760 06:01:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:35.760 06:01:42 -- common/autotest_common.sh@10 -- # set +x 00:06:35.760 06:01:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:35.760 06:01:42 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
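The second pass (nvmf_filesystem_in_capsule) repeats the same target and host flow, but the transport is created with a 4 KiB in-capsule data size, so small host-to-controller transfers can be carried inside the NVMe/TCP command capsule rather than in a separate data phase; only the -c value differs from the first run:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # -c 4096: allow up to 4 KiB of in-capsule data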
00:06:35.760 06:01:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:35.760 06:01:42 -- common/autotest_common.sh@10 -- # set +x 00:06:35.760 [2024-07-13 06:01:42.092239] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:35.760 06:01:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:35.760 06:01:42 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:35.760 06:01:42 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:06:35.760 06:01:42 -- common/autotest_common.sh@1358 -- # local bdev_info 00:06:35.760 06:01:42 -- common/autotest_common.sh@1359 -- # local bs 00:06:35.760 06:01:42 -- common/autotest_common.sh@1360 -- # local nb 00:06:35.760 06:01:42 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:35.760 06:01:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:35.760 06:01:42 -- common/autotest_common.sh@10 -- # set +x 00:06:35.760 06:01:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:35.760 06:01:42 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:06:35.760 { 00:06:35.760 "name": "Malloc1", 00:06:35.760 "aliases": [ 00:06:35.760 "1c87c405-02d4-4300-ae25-ba82528ab3cd" 00:06:35.760 ], 00:06:35.760 "product_name": "Malloc disk", 00:06:35.760 "block_size": 512, 00:06:35.760 "num_blocks": 1048576, 00:06:35.760 "uuid": "1c87c405-02d4-4300-ae25-ba82528ab3cd", 00:06:35.760 "assigned_rate_limits": { 00:06:35.760 "rw_ios_per_sec": 0, 00:06:35.760 "rw_mbytes_per_sec": 0, 00:06:35.760 "r_mbytes_per_sec": 0, 00:06:35.760 "w_mbytes_per_sec": 0 00:06:35.760 }, 00:06:35.760 "claimed": true, 00:06:35.760 "claim_type": "exclusive_write", 00:06:35.760 "zoned": false, 00:06:35.760 "supported_io_types": { 00:06:35.760 "read": true, 00:06:35.760 "write": true, 00:06:35.760 "unmap": true, 00:06:35.760 "write_zeroes": true, 00:06:35.760 "flush": true, 00:06:35.760 "reset": true, 00:06:35.760 "compare": false, 00:06:35.760 "compare_and_write": false, 00:06:35.760 "abort": true, 00:06:35.760 "nvme_admin": false, 00:06:35.760 "nvme_io": false 00:06:35.760 }, 00:06:35.760 "memory_domains": [ 00:06:35.760 { 00:06:35.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:35.760 "dma_device_type": 2 00:06:35.760 } 00:06:35.760 ], 00:06:35.760 "driver_specific": {} 00:06:35.760 } 00:06:35.760 ]' 00:06:35.760 06:01:42 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:06:35.760 06:01:42 -- common/autotest_common.sh@1362 -- # bs=512 00:06:35.760 06:01:42 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:06:35.760 06:01:42 -- common/autotest_common.sh@1363 -- # nb=1048576 00:06:35.760 06:01:42 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:06:35.760 06:01:42 -- common/autotest_common.sh@1367 -- # echo 512 00:06:35.760 06:01:42 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:35.760 06:01:42 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:36.324 06:01:42 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:36.324 06:01:42 -- common/autotest_common.sh@1177 -- # local i=0 00:06:36.324 06:01:42 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:06:36.324 06:01:42 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:06:36.324 06:01:42 -- common/autotest_common.sh@1184 -- # sleep 2 00:06:38.843 06:01:44 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:06:38.843 06:01:44 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:06:38.843 06:01:44 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:06:38.843 06:01:44 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:06:38.843 06:01:44 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:06:38.843 06:01:44 -- common/autotest_common.sh@1187 -- # return 0 00:06:38.843 06:01:44 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:38.843 06:01:44 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:38.843 06:01:44 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:38.843 06:01:44 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:38.843 06:01:44 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:38.843 06:01:44 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:38.843 06:01:44 -- setup/common.sh@80 -- # echo 536870912 00:06:38.843 06:01:44 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:38.843 06:01:44 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:38.843 06:01:44 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:38.843 06:01:44 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:38.843 06:01:45 -- target/filesystem.sh@69 -- # partprobe 00:06:39.100 06:01:45 -- target/filesystem.sh@70 -- # sleep 1 00:06:40.030 06:01:46 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:40.030 06:01:46 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:40.031 06:01:46 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:40.031 06:01:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.031 06:01:46 -- common/autotest_common.sh@10 -- # set +x 00:06:40.031 ************************************ 00:06:40.031 START TEST filesystem_in_capsule_ext4 00:06:40.031 ************************************ 00:06:40.031 06:01:46 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:40.031 06:01:46 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:40.031 06:01:46 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:40.031 06:01:46 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:40.031 06:01:46 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:06:40.031 06:01:46 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:40.031 06:01:46 -- common/autotest_common.sh@904 -- # local i=0 00:06:40.031 06:01:46 -- common/autotest_common.sh@905 -- # local force 00:06:40.031 06:01:46 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:06:40.031 06:01:46 -- common/autotest_common.sh@908 -- # force=-F 00:06:40.031 06:01:46 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:40.031 mke2fs 1.46.5 (30-Dec-2021) 00:06:40.287 Discarding device blocks: 0/522240 done 00:06:40.287 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:40.287 Filesystem UUID: dca265e2-4b71-4e79-97d6-c1d21c598f09 00:06:40.287 Superblock backups stored on blocks: 00:06:40.287 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:40.287 00:06:40.287 Allocating group tables: 0/64 done 00:06:40.288 Writing inode tables: 0/64 done 00:06:40.288 Creating journal (8192 blocks): done 00:06:40.288 Writing superblocks and filesystem accounting information: 0/64 done 00:06:40.288 00:06:40.288 
06:01:46 -- common/autotest_common.sh@921 -- # return 0 00:06:40.288 06:01:46 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:40.851 06:01:47 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:40.851 06:01:47 -- target/filesystem.sh@25 -- # sync 00:06:40.851 06:01:47 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:40.851 06:01:47 -- target/filesystem.sh@27 -- # sync 00:06:40.851 06:01:47 -- target/filesystem.sh@29 -- # i=0 00:06:40.851 06:01:47 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:40.851 06:01:47 -- target/filesystem.sh@37 -- # kill -0 1018986 00:06:40.851 06:01:47 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:40.851 06:01:47 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:40.851 06:01:47 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:40.851 06:01:47 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:40.851 00:06:40.851 real 0m0.768s 00:06:40.851 user 0m0.020s 00:06:40.851 sys 0m0.048s 00:06:40.851 06:01:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.851 06:01:47 -- common/autotest_common.sh@10 -- # set +x 00:06:40.851 ************************************ 00:06:40.851 END TEST filesystem_in_capsule_ext4 00:06:40.851 ************************************ 00:06:40.851 06:01:47 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:40.851 06:01:47 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:40.851 06:01:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.851 06:01:47 -- common/autotest_common.sh@10 -- # set +x 00:06:40.851 ************************************ 00:06:40.851 START TEST filesystem_in_capsule_btrfs 00:06:40.851 ************************************ 00:06:40.851 06:01:47 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:40.851 06:01:47 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:40.851 06:01:47 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:40.851 06:01:47 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:40.851 06:01:47 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:06:40.851 06:01:47 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:40.851 06:01:47 -- common/autotest_common.sh@904 -- # local i=0 00:06:40.851 06:01:47 -- common/autotest_common.sh@905 -- # local force 00:06:40.851 06:01:47 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:06:40.851 06:01:47 -- common/autotest_common.sh@910 -- # force=-f 00:06:40.852 06:01:47 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:41.415 btrfs-progs v6.6.2 00:06:41.415 See https://btrfs.readthedocs.io for more information. 00:06:41.415 00:06:41.415 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:41.415 NOTE: several default settings have changed in version 5.15, please make sure 00:06:41.415 this does not affect your deployments: 00:06:41.415 - DUP for metadata (-m dup) 00:06:41.415 - enabled no-holes (-O no-holes) 00:06:41.415 - enabled free-space-tree (-R free-space-tree) 00:06:41.415 00:06:41.415 Label: (null) 00:06:41.415 UUID: 36b6f9f7-d9c4-4486-9d6f-869ae212b72f 00:06:41.415 Node size: 16384 00:06:41.415 Sector size: 4096 00:06:41.415 Filesystem size: 510.00MiB 00:06:41.415 Block group profiles: 00:06:41.415 Data: single 8.00MiB 00:06:41.415 Metadata: DUP 32.00MiB 00:06:41.415 System: DUP 8.00MiB 00:06:41.415 SSD detected: yes 00:06:41.415 Zoned device: no 00:06:41.415 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:41.415 Runtime features: free-space-tree 00:06:41.415 Checksum: crc32c 00:06:41.415 Number of devices: 1 00:06:41.415 Devices: 00:06:41.415 ID SIZE PATH 00:06:41.415 1 510.00MiB /dev/nvme0n1p1 00:06:41.415 00:06:41.415 06:01:47 -- common/autotest_common.sh@921 -- # return 0 00:06:41.415 06:01:47 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:42.347 06:01:48 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:42.347 06:01:48 -- target/filesystem.sh@25 -- # sync 00:06:42.347 06:01:48 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:42.347 06:01:48 -- target/filesystem.sh@27 -- # sync 00:06:42.347 06:01:48 -- target/filesystem.sh@29 -- # i=0 00:06:42.347 06:01:48 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:42.347 06:01:48 -- target/filesystem.sh@37 -- # kill -0 1018986 00:06:42.347 06:01:48 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:42.347 06:01:48 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:42.347 06:01:48 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:42.347 06:01:48 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:42.347 00:06:42.347 real 0m1.382s 00:06:42.347 user 0m0.021s 00:06:42.347 sys 0m0.115s 00:06:42.347 06:01:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.347 06:01:48 -- common/autotest_common.sh@10 -- # set +x 00:06:42.347 ************************************ 00:06:42.347 END TEST filesystem_in_capsule_btrfs 00:06:42.347 ************************************ 00:06:42.347 06:01:48 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:42.347 06:01:48 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:42.347 06:01:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.347 06:01:48 -- common/autotest_common.sh@10 -- # set +x 00:06:42.347 ************************************ 00:06:42.347 START TEST filesystem_in_capsule_xfs 00:06:42.347 ************************************ 00:06:42.347 06:01:48 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:06:42.347 06:01:48 -- target/filesystem.sh@18 -- # fstype=xfs 00:06:42.347 06:01:48 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:42.347 06:01:48 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:42.347 06:01:48 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:06:42.347 06:01:48 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:42.347 06:01:48 -- common/autotest_common.sh@904 -- # local i=0 00:06:42.347 06:01:48 -- common/autotest_common.sh@905 -- # local force 00:06:42.347 06:01:48 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:06:42.347 06:01:48 -- common/autotest_common.sh@910 -- # force=-f 
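At this point the make_filesystem helper has just picked the force flag for the xfs run. A minimal sketch of the flag selection visible in these traces, not the real helper in autotest_common.sh (whose i counter suggests retry bookkeeping that is omitted here):

    # hypothetical condensation of the traced logic
    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        if [ "$fstype" = ext4 ]; then
            force=-F          # mkfs.ext4 takes -F to force
        else
            force=-f          # mkfs.btrfs / mkfs.xfs take -f
        fi
        mkfs.$fstype $force "$dev_name"
    }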
00:06:42.347 06:01:48 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:42.347 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:42.347 = sectsz=512 attr=2, projid32bit=1 00:06:42.347 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:42.347 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:42.347 data = bsize=4096 blocks=130560, imaxpct=25 00:06:42.347 = sunit=0 swidth=0 blks 00:06:42.347 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:42.347 log =internal log bsize=4096 blocks=16384, version=2 00:06:42.347 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:42.347 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:43.333 Discarding blocks...Done. 00:06:43.333 06:01:49 -- common/autotest_common.sh@921 -- # return 0 00:06:43.333 06:01:49 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:45.246 06:01:51 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:45.246 06:01:51 -- target/filesystem.sh@25 -- # sync 00:06:45.246 06:01:51 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:45.246 06:01:51 -- target/filesystem.sh@27 -- # sync 00:06:45.246 06:01:51 -- target/filesystem.sh@29 -- # i=0 00:06:45.246 06:01:51 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:45.246 06:01:51 -- target/filesystem.sh@37 -- # kill -0 1018986 00:06:45.246 06:01:51 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:45.246 06:01:51 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:45.246 06:01:51 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:45.246 06:01:51 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:45.246 00:06:45.246 real 0m2.984s 00:06:45.246 user 0m0.025s 00:06:45.246 sys 0m0.050s 00:06:45.246 06:01:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.246 06:01:51 -- common/autotest_common.sh@10 -- # set +x 00:06:45.246 ************************************ 00:06:45.246 END TEST filesystem_in_capsule_xfs 00:06:45.246 ************************************ 00:06:45.246 06:01:51 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:45.504 06:01:51 -- target/filesystem.sh@93 -- # sync 00:06:45.504 06:01:51 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:45.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:45.504 06:01:51 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:45.504 06:01:51 -- common/autotest_common.sh@1198 -- # local i=0 00:06:45.504 06:01:51 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:06:45.504 06:01:51 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:45.504 06:01:51 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:06:45.504 06:01:51 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:45.504 06:01:52 -- common/autotest_common.sh@1210 -- # return 0 00:06:45.504 06:01:52 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:45.504 06:01:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:45.504 06:01:52 -- common/autotest_common.sh@10 -- # set +x 00:06:45.761 06:01:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:45.761 06:01:52 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:45.761 06:01:52 -- target/filesystem.sh@101 -- # killprocess 1018986 00:06:45.761 06:01:52 -- common/autotest_common.sh@926 -- # '[' -z 1018986 ']' 00:06:45.761 06:01:52 -- common/autotest_common.sh@930 -- # kill -0 1018986 
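killprocess is running at this point; across both passes the teardown traced here reduces to the host dropping the test partition and disconnecting, the subsystem being deleted over RPC, and the target process being stopped, with nvmftestfini doing the final module and address cleanup after the second pass. Condensed (the namespace removal itself happens in _remove_spdk_ns, whose exact commands are not shown in the trace):

    # per pass (filesystem.sh cleanup)
    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1        # drop the SPDK_TEST partition
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"                    # killprocess
    # once, at the end (nvmftestfini)
    modprobe -v -r nvme-tcp                               # rmmod output shows nvme_tcp, nvme_fabrics, nvme_keyring unloaded
    modprobe -v -r nvme-fabrics
    ip -4 addr flush cvl_0_1                              # nvmf_tcp_fini; _remove_spdk_ns cleans up the namespace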
00:06:45.761 06:01:52 -- common/autotest_common.sh@931 -- # uname 00:06:45.761 06:01:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:45.761 06:01:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1018986 00:06:45.761 06:01:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:45.761 06:01:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:45.761 06:01:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1018986' 00:06:45.761 killing process with pid 1018986 00:06:45.761 06:01:52 -- common/autotest_common.sh@945 -- # kill 1018986 00:06:45.761 06:01:52 -- common/autotest_common.sh@950 -- # wait 1018986 00:06:46.328 06:01:52 -- target/filesystem.sh@102 -- # nvmfpid= 00:06:46.328 00:06:46.328 real 0m11.678s 00:06:46.328 user 0m44.923s 00:06:46.328 sys 0m1.691s 00:06:46.328 06:01:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.328 06:01:52 -- common/autotest_common.sh@10 -- # set +x 00:06:46.328 ************************************ 00:06:46.328 END TEST nvmf_filesystem_in_capsule 00:06:46.328 ************************************ 00:06:46.328 06:01:52 -- target/filesystem.sh@108 -- # nvmftestfini 00:06:46.328 06:01:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:06:46.328 06:01:52 -- nvmf/common.sh@116 -- # sync 00:06:46.328 06:01:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:06:46.328 06:01:52 -- nvmf/common.sh@119 -- # set +e 00:06:46.329 06:01:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:06:46.329 06:01:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:06:46.329 rmmod nvme_tcp 00:06:46.329 rmmod nvme_fabrics 00:06:46.329 rmmod nvme_keyring 00:06:46.329 06:01:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:06:46.329 06:01:52 -- nvmf/common.sh@123 -- # set -e 00:06:46.329 06:01:52 -- nvmf/common.sh@124 -- # return 0 00:06:46.329 06:01:52 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:06:46.329 06:01:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:06:46.329 06:01:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:06:46.329 06:01:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:06:46.329 06:01:52 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:46.329 06:01:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:06:46.329 06:01:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:46.329 06:01:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:46.329 06:01:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:48.231 06:01:54 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:06:48.231 00:06:48.231 real 0m31.191s 00:06:48.231 user 1m43.932s 00:06:48.231 sys 0m5.311s 00:06:48.231 06:01:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.231 06:01:54 -- common/autotest_common.sh@10 -- # set +x 00:06:48.231 ************************************ 00:06:48.231 END TEST nvmf_filesystem 00:06:48.231 ************************************ 00:06:48.231 06:01:54 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:48.231 06:01:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:48.231 06:01:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:48.231 06:01:54 -- common/autotest_common.sh@10 -- # set +x 00:06:48.231 ************************************ 00:06:48.231 START TEST nvmf_discovery 00:06:48.231 ************************************ 00:06:48.232 
06:01:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:48.232 * Looking for test storage... 00:06:48.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:48.232 06:01:54 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:48.232 06:01:54 -- nvmf/common.sh@7 -- # uname -s 00:06:48.232 06:01:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:48.232 06:01:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:48.232 06:01:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:48.232 06:01:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:48.232 06:01:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:48.232 06:01:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:48.232 06:01:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:48.232 06:01:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:48.232 06:01:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:48.232 06:01:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:48.232 06:01:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:48.232 06:01:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:48.232 06:01:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:48.232 06:01:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:48.232 06:01:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:48.232 06:01:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:48.232 06:01:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:48.232 06:01:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:48.232 06:01:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:48.232 06:01:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.232 06:01:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.232 06:01:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.232 06:01:54 -- paths/export.sh@5 -- # export PATH 00:06:48.232 06:01:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.232 06:01:54 -- nvmf/common.sh@46 -- # : 0 00:06:48.232 06:01:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:48.232 06:01:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:48.232 06:01:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:48.232 06:01:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:48.232 06:01:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:48.232 06:01:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:48.232 06:01:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:48.232 06:01:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:48.232 06:01:54 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:48.232 06:01:54 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:48.232 06:01:54 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:48.232 06:01:54 -- target/discovery.sh@15 -- # hash nvme 00:06:48.232 06:01:54 -- target/discovery.sh@20 -- # nvmftestinit 00:06:48.232 06:01:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:48.232 06:01:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:48.232 06:01:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:48.232 06:01:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:48.232 06:01:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:48.232 06:01:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:48.232 06:01:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:48.232 06:01:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:48.232 06:01:54 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:06:48.232 06:01:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:06:48.232 06:01:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:06:48.232 06:01:54 -- common/autotest_common.sh@10 -- # set +x 00:06:50.764 06:01:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:50.764 06:01:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:06:50.764 06:01:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:06:50.764 06:01:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:06:50.764 06:01:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:06:50.764 06:01:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:06:50.764 06:01:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:06:50.764 06:01:56 -- 
nvmf/common.sh@294 -- # net_devs=() 00:06:50.764 06:01:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:06:50.764 06:01:56 -- nvmf/common.sh@295 -- # e810=() 00:06:50.764 06:01:56 -- nvmf/common.sh@295 -- # local -ga e810 00:06:50.764 06:01:56 -- nvmf/common.sh@296 -- # x722=() 00:06:50.764 06:01:56 -- nvmf/common.sh@296 -- # local -ga x722 00:06:50.764 06:01:56 -- nvmf/common.sh@297 -- # mlx=() 00:06:50.764 06:01:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:06:50.764 06:01:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:50.764 06:01:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:50.764 06:01:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:50.764 06:01:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:50.764 06:01:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:50.764 06:01:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:50.764 06:01:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:50.764 06:01:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:50.764 06:01:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:50.764 06:01:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:50.764 06:01:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:50.764 06:01:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:06:50.764 06:01:56 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:06:50.764 06:01:56 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:06:50.764 06:01:56 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:06:50.764 06:01:56 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:06:50.764 06:01:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:06:50.764 06:01:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:50.764 06:01:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:50.764 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:50.764 06:01:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:50.764 06:01:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:50.764 06:01:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:50.764 06:01:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:50.764 06:01:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:50.764 06:01:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:50.764 06:01:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:50.764 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:50.764 06:01:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:50.764 06:01:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:50.764 06:01:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:50.764 06:01:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:50.764 06:01:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:50.764 06:01:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:06:50.764 06:01:56 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:06:50.764 06:01:56 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:06:50.764 06:01:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:50.764 06:01:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:50.764 06:01:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:50.764 06:01:56 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:50.764 06:01:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:50.764 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:50.764 06:01:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:50.764 06:01:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:50.764 06:01:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:50.764 06:01:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:50.764 06:01:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:50.764 06:01:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:50.764 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:50.764 06:01:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:50.764 06:01:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:06:50.764 06:01:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:06:50.764 06:01:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:06:50.764 06:01:56 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:06:50.764 06:01:56 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:06:50.764 06:01:56 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:50.764 06:01:56 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:50.764 06:01:56 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:50.765 06:01:56 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:06:50.765 06:01:56 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:50.765 06:01:56 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:50.765 06:01:56 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:06:50.765 06:01:56 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:50.765 06:01:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:50.765 06:01:56 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:06:50.765 06:01:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:06:50.765 06:01:56 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:06:50.765 06:01:56 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:50.765 06:01:56 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:50.765 06:01:56 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:50.765 06:01:56 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:06:50.765 06:01:56 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:50.765 06:01:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:50.765 06:01:56 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:50.765 06:01:56 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:06:50.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:50.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:06:50.765 00:06:50.765 --- 10.0.0.2 ping statistics --- 00:06:50.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.765 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:06:50.765 06:01:56 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:50.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:50.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:06:50.765 00:06:50.765 --- 10.0.0.1 ping statistics --- 00:06:50.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.765 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:06:50.765 06:01:56 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:50.765 06:01:56 -- nvmf/common.sh@410 -- # return 0 00:06:50.765 06:01:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:50.765 06:01:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:50.765 06:01:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:50.765 06:01:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:50.765 06:01:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:50.765 06:01:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:50.765 06:01:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:50.765 06:01:57 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:50.765 06:01:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:06:50.765 06:01:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:50.765 06:01:57 -- common/autotest_common.sh@10 -- # set +x 00:06:50.765 06:01:57 -- nvmf/common.sh@469 -- # nvmfpid=1022519 00:06:50.765 06:01:57 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:50.765 06:01:57 -- nvmf/common.sh@470 -- # waitforlisten 1022519 00:06:50.765 06:01:57 -- common/autotest_common.sh@819 -- # '[' -z 1022519 ']' 00:06:50.765 06:01:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.765 06:01:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:50.765 06:01:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.765 06:01:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:50.765 06:01:57 -- common/autotest_common.sh@10 -- # set +x 00:06:50.765 [2024-07-13 06:01:57.051779] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:50.765 [2024-07-13 06:01:57.051924] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:50.765 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.765 [2024-07-13 06:01:57.121771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:50.765 [2024-07-13 06:01:57.240690] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:50.765 [2024-07-13 06:01:57.240878] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:50.765 [2024-07-13 06:01:57.240908] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:50.765 [2024-07-13 06:01:57.240934] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
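For readability: the nvmf_tcp_init sequence traced a few records back splits the two E810 ports between a network namespace (target side) and the host (initiator side) before the target application is started. This is a condensed sketch of those helper steps, not their exact code; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are taken from the log.

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port lives in its own namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator port stays in the host namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                            # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator sanity check
  modprobe nvme-tcp                                             # load the kernel NVMe/TCP initiator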
00:06:50.765 [2024-07-13 06:01:57.241003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.765 [2024-07-13 06:01:57.241061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.765 [2024-07-13 06:01:57.241113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.765 [2024-07-13 06:01:57.241116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.699 06:01:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:51.699 06:01:57 -- common/autotest_common.sh@852 -- # return 0 00:06:51.699 06:01:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:06:51.699 06:01:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:51.699 06:01:57 -- common/autotest_common.sh@10 -- # set +x 00:06:51.699 06:01:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:51.699 06:01:58 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:51.699 06:01:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.699 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.699 [2024-07-13 06:01:58.020356] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:51.699 06:01:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.699 06:01:58 -- target/discovery.sh@26 -- # seq 1 4 00:06:51.699 06:01:58 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:51.699 06:01:58 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:06:51.699 06:01:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.699 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.699 Null1 00:06:51.699 06:01:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.699 06:01:58 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:51.699 06:01:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.699 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.699 06:01:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.699 06:01:58 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:51.700 06:01:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.700 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.700 06:01:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.700 06:01:58 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:51.700 06:01:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.700 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.700 [2024-07-13 06:01:58.060630] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:51.700 06:01:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.700 06:01:58 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:51.700 06:01:58 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:51.700 06:01:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.700 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.700 Null2 00:06:51.700 06:01:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.700 06:01:58 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:51.700 06:01:58 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.700 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.700 06:01:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.700 06:01:58 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:51.700 06:01:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.700 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.700 06:01:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.700 06:01:58 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:51.700 06:01:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.700 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.700 06:01:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.700 06:01:58 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:51.700 06:01:58 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:51.700 06:01:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.700 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.700 Null3 00:06:51.700 06:01:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.700 06:01:58 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:51.700 06:01:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.700 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.700 06:01:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.700 06:01:58 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:51.700 06:01:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.700 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.700 06:01:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.700 06:01:58 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:51.700 06:01:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.700 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.700 06:01:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.700 06:01:58 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:51.700 06:01:58 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:51.700 06:01:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.700 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.700 Null4 00:06:51.700 06:01:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.700 06:01:58 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:51.700 06:01:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.700 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.700 06:01:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.700 06:01:58 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:51.700 06:01:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.700 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.700 06:01:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.700 06:01:58 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:51.700 
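The per-subsystem setup just traced is the same four RPCs applied for i=1..4, followed by the discovery listener and referral registration that appear immediately below. A hypothetical stand-alone equivalent using scripts/rpc.py (the test itself goes through its rpc_cmd wrapper, which targets the RPC socket inside the namespace) would be:

  for i in 1 2 3 4; do
      rpc.py bdev_null_create Null$i 102400 512                          # NULL_BDEV_SIZE / NULL_BLOCK_SIZE from the script
      rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
      rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i    # expose the null bdev as a namespace
      rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420  # discovery service on the same port
  rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430            # shows up as the sixth discovery log entry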
06:01:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.700 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.700 06:01:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.700 06:01:58 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:51.700 06:01:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.700 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.700 06:01:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.700 06:01:58 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:51.700 06:01:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.700 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.700 06:01:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.700 06:01:58 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:06:51.958 00:06:51.958 Discovery Log Number of Records 6, Generation counter 6 00:06:51.958 =====Discovery Log Entry 0====== 00:06:51.958 trtype: tcp 00:06:51.959 adrfam: ipv4 00:06:51.959 subtype: current discovery subsystem 00:06:51.959 treq: not required 00:06:51.959 portid: 0 00:06:51.959 trsvcid: 4420 00:06:51.959 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:51.959 traddr: 10.0.0.2 00:06:51.959 eflags: explicit discovery connections, duplicate discovery information 00:06:51.959 sectype: none 00:06:51.959 =====Discovery Log Entry 1====== 00:06:51.959 trtype: tcp 00:06:51.959 adrfam: ipv4 00:06:51.959 subtype: nvme subsystem 00:06:51.959 treq: not required 00:06:51.959 portid: 0 00:06:51.959 trsvcid: 4420 00:06:51.959 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:51.959 traddr: 10.0.0.2 00:06:51.959 eflags: none 00:06:51.959 sectype: none 00:06:51.959 =====Discovery Log Entry 2====== 00:06:51.959 trtype: tcp 00:06:51.959 adrfam: ipv4 00:06:51.959 subtype: nvme subsystem 00:06:51.959 treq: not required 00:06:51.959 portid: 0 00:06:51.959 trsvcid: 4420 00:06:51.959 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:51.959 traddr: 10.0.0.2 00:06:51.959 eflags: none 00:06:51.959 sectype: none 00:06:51.959 =====Discovery Log Entry 3====== 00:06:51.959 trtype: tcp 00:06:51.959 adrfam: ipv4 00:06:51.959 subtype: nvme subsystem 00:06:51.959 treq: not required 00:06:51.959 portid: 0 00:06:51.959 trsvcid: 4420 00:06:51.959 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:51.959 traddr: 10.0.0.2 00:06:51.959 eflags: none 00:06:51.959 sectype: none 00:06:51.959 =====Discovery Log Entry 4====== 00:06:51.959 trtype: tcp 00:06:51.959 adrfam: ipv4 00:06:51.959 subtype: nvme subsystem 00:06:51.959 treq: not required 00:06:51.959 portid: 0 00:06:51.959 trsvcid: 4420 00:06:51.959 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:51.959 traddr: 10.0.0.2 00:06:51.959 eflags: none 00:06:51.959 sectype: none 00:06:51.959 =====Discovery Log Entry 5====== 00:06:51.959 trtype: tcp 00:06:51.959 adrfam: ipv4 00:06:51.959 subtype: discovery subsystem referral 00:06:51.959 treq: not required 00:06:51.959 portid: 0 00:06:51.959 trsvcid: 4430 00:06:51.959 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:51.959 traddr: 10.0.0.2 00:06:51.959 eflags: none 00:06:51.959 sectype: none 00:06:51.959 06:01:58 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:51.959 Perform nvmf subsystem discovery via RPC 00:06:51.959 06:01:58 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:51.959 06:01:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.959 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.959 [2024-07-13 06:01:58.305296] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:06:51.959 [ 00:06:51.959 { 00:06:51.959 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:51.959 "subtype": "Discovery", 00:06:51.959 "listen_addresses": [ 00:06:51.959 { 00:06:51.959 "transport": "TCP", 00:06:51.959 "trtype": "TCP", 00:06:51.959 "adrfam": "IPv4", 00:06:51.959 "traddr": "10.0.0.2", 00:06:51.959 "trsvcid": "4420" 00:06:51.959 } 00:06:51.959 ], 00:06:51.959 "allow_any_host": true, 00:06:51.959 "hosts": [] 00:06:51.959 }, 00:06:51.959 { 00:06:51.959 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:51.959 "subtype": "NVMe", 00:06:51.959 "listen_addresses": [ 00:06:51.959 { 00:06:51.959 "transport": "TCP", 00:06:51.959 "trtype": "TCP", 00:06:51.959 "adrfam": "IPv4", 00:06:51.959 "traddr": "10.0.0.2", 00:06:51.959 "trsvcid": "4420" 00:06:51.959 } 00:06:51.959 ], 00:06:51.959 "allow_any_host": true, 00:06:51.959 "hosts": [], 00:06:51.959 "serial_number": "SPDK00000000000001", 00:06:51.959 "model_number": "SPDK bdev Controller", 00:06:51.959 "max_namespaces": 32, 00:06:51.959 "min_cntlid": 1, 00:06:51.959 "max_cntlid": 65519, 00:06:51.959 "namespaces": [ 00:06:51.959 { 00:06:51.959 "nsid": 1, 00:06:51.959 "bdev_name": "Null1", 00:06:51.959 "name": "Null1", 00:06:51.959 "nguid": "B7F630C12E8B4F769434A4FDF138E0E1", 00:06:51.959 "uuid": "b7f630c1-2e8b-4f76-9434-a4fdf138e0e1" 00:06:51.959 } 00:06:51.959 ] 00:06:51.959 }, 00:06:51.959 { 00:06:51.959 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:51.959 "subtype": "NVMe", 00:06:51.959 "listen_addresses": [ 00:06:51.959 { 00:06:51.959 "transport": "TCP", 00:06:51.959 "trtype": "TCP", 00:06:51.959 "adrfam": "IPv4", 00:06:51.959 "traddr": "10.0.0.2", 00:06:51.959 "trsvcid": "4420" 00:06:51.959 } 00:06:51.959 ], 00:06:51.959 "allow_any_host": true, 00:06:51.959 "hosts": [], 00:06:51.959 "serial_number": "SPDK00000000000002", 00:06:51.959 "model_number": "SPDK bdev Controller", 00:06:51.959 "max_namespaces": 32, 00:06:51.959 "min_cntlid": 1, 00:06:51.959 "max_cntlid": 65519, 00:06:51.959 "namespaces": [ 00:06:51.959 { 00:06:51.959 "nsid": 1, 00:06:51.959 "bdev_name": "Null2", 00:06:51.959 "name": "Null2", 00:06:51.959 "nguid": "8986623BE72F4BBC92829A18D6384DE7", 00:06:51.959 "uuid": "8986623b-e72f-4bbc-9282-9a18d6384de7" 00:06:51.959 } 00:06:51.959 ] 00:06:51.959 }, 00:06:51.959 { 00:06:51.959 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:51.959 "subtype": "NVMe", 00:06:51.959 "listen_addresses": [ 00:06:51.959 { 00:06:51.959 "transport": "TCP", 00:06:51.959 "trtype": "TCP", 00:06:51.959 "adrfam": "IPv4", 00:06:51.959 "traddr": "10.0.0.2", 00:06:51.959 "trsvcid": "4420" 00:06:51.959 } 00:06:51.959 ], 00:06:51.959 "allow_any_host": true, 00:06:51.959 "hosts": [], 00:06:51.959 "serial_number": "SPDK00000000000003", 00:06:51.959 "model_number": "SPDK bdev Controller", 00:06:51.959 "max_namespaces": 32, 00:06:51.959 "min_cntlid": 1, 00:06:51.959 "max_cntlid": 65519, 00:06:51.959 "namespaces": [ 00:06:51.959 { 00:06:51.959 "nsid": 1, 00:06:51.959 "bdev_name": "Null3", 00:06:51.959 "name": "Null3", 00:06:51.959 "nguid": "CA585C7281EC4ACBBE217B31A00CF792", 00:06:51.959 "uuid": "ca585c72-81ec-4acb-be21-7b31a00cf792" 00:06:51.959 } 00:06:51.959 ] 
00:06:51.959 }, 00:06:51.959 { 00:06:51.959 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:51.959 "subtype": "NVMe", 00:06:51.959 "listen_addresses": [ 00:06:51.959 { 00:06:51.959 "transport": "TCP", 00:06:51.959 "trtype": "TCP", 00:06:51.959 "adrfam": "IPv4", 00:06:51.959 "traddr": "10.0.0.2", 00:06:51.959 "trsvcid": "4420" 00:06:51.959 } 00:06:51.959 ], 00:06:51.959 "allow_any_host": true, 00:06:51.959 "hosts": [], 00:06:51.959 "serial_number": "SPDK00000000000004", 00:06:51.959 "model_number": "SPDK bdev Controller", 00:06:51.959 "max_namespaces": 32, 00:06:51.959 "min_cntlid": 1, 00:06:51.959 "max_cntlid": 65519, 00:06:51.959 "namespaces": [ 00:06:51.959 { 00:06:51.959 "nsid": 1, 00:06:51.959 "bdev_name": "Null4", 00:06:51.959 "name": "Null4", 00:06:51.959 "nguid": "D58BD70163134AE7957320C2D08FA35C", 00:06:51.959 "uuid": "d58bd701-6313-4ae7-9573-20c2d08fa35c" 00:06:51.959 } 00:06:51.959 ] 00:06:51.959 } 00:06:51.959 ] 00:06:51.959 06:01:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.959 06:01:58 -- target/discovery.sh@42 -- # seq 1 4 00:06:51.959 06:01:58 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:51.959 06:01:58 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:51.959 06:01:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.959 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.959 06:01:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.959 06:01:58 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:51.959 06:01:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.959 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.959 06:01:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.959 06:01:58 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:51.959 06:01:58 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:51.959 06:01:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.959 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.959 06:01:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.959 06:01:58 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:51.959 06:01:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.959 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.959 06:01:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.959 06:01:58 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:51.959 06:01:58 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:51.959 06:01:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.959 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.959 06:01:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.959 06:01:58 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:51.959 06:01:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.959 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.959 06:01:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.959 06:01:58 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:51.959 06:01:58 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:51.959 06:01:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.959 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.959 06:01:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
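Aside from the test's own assertions, the nvmf_get_subsystems dump shown above is easy to post-process; the jq expression below is illustrative only, but the fields it reads (nqn, subtype, namespaces[].bdev_name) are exactly the ones present in that JSON.

  # List each NVMe subsystem together with the bdev backing its namespace(s):
  rpc.py nvmf_get_subsystems \
      | jq -r '.[] | select(.subtype == "NVMe") | .nqn + " -> " + (.namespaces[].bdev_name)'
  # With the setup above this yields nqn.2016-06.io.spdk:cnode1 -> Null1 ... cnode4 -> Null4.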
00:06:51.959 06:01:58 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:51.959 06:01:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.959 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.959 06:01:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.960 06:01:58 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:51.960 06:01:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.960 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.960 06:01:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.960 06:01:58 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:51.960 06:01:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:51.960 06:01:58 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:51.960 06:01:58 -- common/autotest_common.sh@10 -- # set +x 00:06:51.960 06:01:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:51.960 06:01:58 -- target/discovery.sh@49 -- # check_bdevs= 00:06:51.960 06:01:58 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:51.960 06:01:58 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:51.960 06:01:58 -- target/discovery.sh@57 -- # nvmftestfini 00:06:51.960 06:01:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:06:51.960 06:01:58 -- nvmf/common.sh@116 -- # sync 00:06:51.960 06:01:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:06:51.960 06:01:58 -- nvmf/common.sh@119 -- # set +e 00:06:51.960 06:01:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:06:51.960 06:01:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:06:51.960 rmmod nvme_tcp 00:06:51.960 rmmod nvme_fabrics 00:06:51.960 rmmod nvme_keyring 00:06:52.218 06:01:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:06:52.218 06:01:58 -- nvmf/common.sh@123 -- # set -e 00:06:52.218 06:01:58 -- nvmf/common.sh@124 -- # return 0 00:06:52.218 06:01:58 -- nvmf/common.sh@477 -- # '[' -n 1022519 ']' 00:06:52.218 06:01:58 -- nvmf/common.sh@478 -- # killprocess 1022519 00:06:52.218 06:01:58 -- common/autotest_common.sh@926 -- # '[' -z 1022519 ']' 00:06:52.218 06:01:58 -- common/autotest_common.sh@930 -- # kill -0 1022519 00:06:52.218 06:01:58 -- common/autotest_common.sh@931 -- # uname 00:06:52.218 06:01:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:52.218 06:01:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1022519 00:06:52.218 06:01:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:52.218 06:01:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:52.218 06:01:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1022519' 00:06:52.218 killing process with pid 1022519 00:06:52.218 06:01:58 -- common/autotest_common.sh@945 -- # kill 1022519 00:06:52.218 [2024-07-13 06:01:58.516027] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:06:52.218 06:01:58 -- common/autotest_common.sh@950 -- # wait 1022519 00:06:52.478 06:01:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:06:52.478 06:01:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:06:52.478 06:01:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:06:52.478 06:01:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:52.478 06:01:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:06:52.478 06:01:58 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:06:52.478 06:01:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:52.478 06:01:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:54.383 06:02:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:06:54.383 00:06:54.383 real 0m6.163s 00:06:54.383 user 0m7.039s 00:06:54.383 sys 0m1.925s 00:06:54.383 06:02:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.383 06:02:00 -- common/autotest_common.sh@10 -- # set +x 00:06:54.383 ************************************ 00:06:54.383 END TEST nvmf_discovery 00:06:54.383 ************************************ 00:06:54.383 06:02:00 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:54.383 06:02:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:54.383 06:02:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:54.383 06:02:00 -- common/autotest_common.sh@10 -- # set +x 00:06:54.383 ************************************ 00:06:54.383 START TEST nvmf_referrals 00:06:54.383 ************************************ 00:06:54.383 06:02:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:54.639 * Looking for test storage... 00:06:54.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:54.640 06:02:00 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:54.640 06:02:00 -- nvmf/common.sh@7 -- # uname -s 00:06:54.640 06:02:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:54.640 06:02:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:54.640 06:02:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:54.640 06:02:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:54.640 06:02:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:54.640 06:02:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:54.640 06:02:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:54.640 06:02:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:54.640 06:02:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:54.640 06:02:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:54.640 06:02:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:54.640 06:02:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:54.640 06:02:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:54.640 06:02:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:54.640 06:02:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:54.640 06:02:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:54.640 06:02:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:54.640 06:02:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:54.640 06:02:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:54.640 06:02:00 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.640 06:02:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.640 06:02:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.640 06:02:00 -- paths/export.sh@5 -- # export PATH 00:06:54.640 06:02:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.640 06:02:00 -- nvmf/common.sh@46 -- # : 0 00:06:54.640 06:02:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:54.640 06:02:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:54.640 06:02:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:54.640 06:02:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:54.640 06:02:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:54.640 06:02:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:54.640 06:02:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:54.640 06:02:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:54.640 06:02:00 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:54.640 06:02:00 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:54.640 06:02:00 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:06:54.640 06:02:00 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:54.640 06:02:00 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:54.640 06:02:00 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:54.640 06:02:00 -- target/referrals.sh@37 -- # nvmftestinit 00:06:54.640 06:02:00 -- nvmf/common.sh@429 -- # '[' 
-z tcp ']' 00:06:54.640 06:02:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:54.640 06:02:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:54.640 06:02:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:54.640 06:02:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:54.640 06:02:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:54.640 06:02:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:54.640 06:02:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:54.640 06:02:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:06:54.640 06:02:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:06:54.640 06:02:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:06:54.640 06:02:00 -- common/autotest_common.sh@10 -- # set +x 00:06:56.537 06:02:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:56.537 06:02:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:06:56.537 06:02:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:06:56.537 06:02:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:06:56.537 06:02:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:06:56.537 06:02:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:06:56.537 06:02:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:06:56.537 06:02:02 -- nvmf/common.sh@294 -- # net_devs=() 00:06:56.537 06:02:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:06:56.537 06:02:02 -- nvmf/common.sh@295 -- # e810=() 00:06:56.537 06:02:02 -- nvmf/common.sh@295 -- # local -ga e810 00:06:56.537 06:02:02 -- nvmf/common.sh@296 -- # x722=() 00:06:56.537 06:02:02 -- nvmf/common.sh@296 -- # local -ga x722 00:06:56.537 06:02:02 -- nvmf/common.sh@297 -- # mlx=() 00:06:56.537 06:02:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:06:56.537 06:02:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:56.537 06:02:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:56.537 06:02:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:56.537 06:02:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:56.537 06:02:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:56.537 06:02:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:56.537 06:02:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:56.537 06:02:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:56.537 06:02:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:56.537 06:02:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:56.537 06:02:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:56.537 06:02:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:06:56.537 06:02:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:06:56.537 06:02:02 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:06:56.537 06:02:02 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:06:56.537 06:02:02 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:06:56.537 06:02:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:06:56.537 06:02:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:56.537 06:02:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:56.537 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:56.537 06:02:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:56.537 06:02:02 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:56.537 06:02:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:56.537 06:02:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:56.537 06:02:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:56.537 06:02:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:56.537 06:02:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:56.537 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:56.537 06:02:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:56.537 06:02:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:56.537 06:02:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:56.537 06:02:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:56.537 06:02:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:56.537 06:02:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:06:56.537 06:02:02 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:06:56.537 06:02:02 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:06:56.537 06:02:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:56.537 06:02:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:56.537 06:02:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:56.537 06:02:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:56.537 06:02:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:56.537 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:56.537 06:02:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:56.537 06:02:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:56.537 06:02:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:56.537 06:02:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:56.537 06:02:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:56.537 06:02:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:56.537 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:56.537 06:02:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:56.537 06:02:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:06:56.537 06:02:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:06:56.537 06:02:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:06:56.537 06:02:02 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:06:56.537 06:02:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:06:56.537 06:02:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:56.537 06:02:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:56.537 06:02:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:56.537 06:02:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:06:56.537 06:02:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:56.537 06:02:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:56.537 06:02:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:06:56.537 06:02:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:56.537 06:02:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:56.537 06:02:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:06:56.537 06:02:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:06:56.537 06:02:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:06:56.537 06:02:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:06:56.537 06:02:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:56.537 06:02:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:56.537 06:02:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:06:56.537 06:02:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:56.537 06:02:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:56.537 06:02:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:56.537 06:02:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:06:56.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:56.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:06:56.537 00:06:56.537 --- 10.0.0.2 ping statistics --- 00:06:56.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:56.537 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:06:56.537 06:02:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:56.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:56.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:06:56.537 00:06:56.537 --- 10.0.0.1 ping statistics --- 00:06:56.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:56.537 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:06:56.537 06:02:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:56.537 06:02:02 -- nvmf/common.sh@410 -- # return 0 00:06:56.537 06:02:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:56.537 06:02:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:56.537 06:02:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:56.537 06:02:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:56.537 06:02:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:56.537 06:02:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:56.537 06:02:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:56.537 06:02:02 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:56.537 06:02:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:06:56.537 06:02:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:56.537 06:02:02 -- common/autotest_common.sh@10 -- # set +x 00:06:56.537 06:02:02 -- nvmf/common.sh@469 -- # nvmfpid=1024636 00:06:56.537 06:02:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:56.537 06:02:02 -- nvmf/common.sh@470 -- # waitforlisten 1024636 00:06:56.537 06:02:02 -- common/autotest_common.sh@819 -- # '[' -z 1024636 ']' 00:06:56.537 06:02:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.537 06:02:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:56.537 06:02:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.537 06:02:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:56.537 06:02:02 -- common/autotest_common.sh@10 -- # set +x 00:06:56.537 [2024-07-13 06:02:03.002549] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
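As in the earlier tests, the target application is launched inside the namespace and the harness waits for its RPC socket before configuring anything. The lines below are a simplified stand-in for the nvmfappstart/waitforlisten helpers (the binary path is relative to the spdk checkout, and the polling loop is illustrative; the real helper watches /var/tmp/spdk.sock and the pid):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Block until the app has finished subsystem init and answers on its RPC socket.
  until rpc.py -s /var/tmp/spdk.sock framework_wait_init >/dev/null 2>&1; do
      sleep 0.5
  done
  rpc.py nvmf_create_transport -t tcp -o -u 8192   # matches the create_transport call traced below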
00:06:56.537 [2024-07-13 06:02:03.002623] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:56.537 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.794 [2024-07-13 06:02:03.068198] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:56.794 [2024-07-13 06:02:03.177547] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:56.794 [2024-07-13 06:02:03.177698] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:56.794 [2024-07-13 06:02:03.177716] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:56.794 [2024-07-13 06:02:03.177729] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:56.794 [2024-07-13 06:02:03.177789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.794 [2024-07-13 06:02:03.177848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.794 [2024-07-13 06:02:03.177916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:56.794 [2024-07-13 06:02:03.177920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.725 06:02:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:57.725 06:02:03 -- common/autotest_common.sh@852 -- # return 0 00:06:57.725 06:02:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:06:57.726 06:02:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:57.726 06:02:03 -- common/autotest_common.sh@10 -- # set +x 00:06:57.726 06:02:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:57.726 06:02:03 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:57.726 06:02:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:57.726 06:02:03 -- common/autotest_common.sh@10 -- # set +x 00:06:57.726 [2024-07-13 06:02:03.976404] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:57.726 06:02:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:57.726 06:02:03 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:57.726 06:02:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:57.726 06:02:03 -- common/autotest_common.sh@10 -- # set +x 00:06:57.726 [2024-07-13 06:02:03.988587] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:06:57.726 06:02:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:57.726 06:02:03 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:57.726 06:02:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:57.726 06:02:03 -- common/autotest_common.sh@10 -- # set +x 00:06:57.726 06:02:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:57.726 06:02:03 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:57.726 06:02:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:57.726 06:02:04 -- common/autotest_common.sh@10 -- # set +x 00:06:57.726 06:02:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:57.726 06:02:04 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 
-s 4430 00:06:57.726 06:02:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:57.726 06:02:04 -- common/autotest_common.sh@10 -- # set +x 00:06:57.726 06:02:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:57.726 06:02:04 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:57.726 06:02:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:57.726 06:02:04 -- target/referrals.sh@48 -- # jq length 00:06:57.726 06:02:04 -- common/autotest_common.sh@10 -- # set +x 00:06:57.726 06:02:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:57.726 06:02:04 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:57.726 06:02:04 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:57.726 06:02:04 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:57.726 06:02:04 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:57.726 06:02:04 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:57.726 06:02:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:57.726 06:02:04 -- common/autotest_common.sh@10 -- # set +x 00:06:57.726 06:02:04 -- target/referrals.sh@21 -- # sort 00:06:57.726 06:02:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:57.726 06:02:04 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:57.726 06:02:04 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:57.726 06:02:04 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:57.726 06:02:04 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:57.726 06:02:04 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:57.726 06:02:04 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:57.726 06:02:04 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:57.726 06:02:04 -- target/referrals.sh@26 -- # sort 00:06:57.983 06:02:04 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:57.983 06:02:04 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:57.983 06:02:04 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:57.983 06:02:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:57.983 06:02:04 -- common/autotest_common.sh@10 -- # set +x 00:06:57.983 06:02:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:57.983 06:02:04 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:57.983 06:02:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:57.983 06:02:04 -- common/autotest_common.sh@10 -- # set +x 00:06:57.983 06:02:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:57.983 06:02:04 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:57.983 06:02:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:57.983 06:02:04 -- common/autotest_common.sh@10 -- # set +x 00:06:57.983 06:02:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:57.983 06:02:04 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:57.983 06:02:04 -- target/referrals.sh@56 -- # jq length 00:06:57.983 06:02:04 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:06:57.983 06:02:04 -- common/autotest_common.sh@10 -- # set +x 00:06:57.983 06:02:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:57.983 06:02:04 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:57.983 06:02:04 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:57.983 06:02:04 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:57.983 06:02:04 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:57.983 06:02:04 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:57.983 06:02:04 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:57.983 06:02:04 -- target/referrals.sh@26 -- # sort 00:06:57.983 06:02:04 -- target/referrals.sh@26 -- # echo 00:06:57.983 06:02:04 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:57.983 06:02:04 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:06:57.983 06:02:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:57.983 06:02:04 -- common/autotest_common.sh@10 -- # set +x 00:06:57.983 06:02:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:57.983 06:02:04 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:57.983 06:02:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:57.983 06:02:04 -- common/autotest_common.sh@10 -- # set +x 00:06:57.983 06:02:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:57.983 06:02:04 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:57.983 06:02:04 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:57.983 06:02:04 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:57.983 06:02:04 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:57.983 06:02:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:57.983 06:02:04 -- common/autotest_common.sh@10 -- # set +x 00:06:57.983 06:02:04 -- target/referrals.sh@21 -- # sort 00:06:57.983 06:02:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:58.241 06:02:04 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:58.241 06:02:04 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:58.241 06:02:04 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:58.241 06:02:04 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:58.241 06:02:04 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:58.241 06:02:04 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:58.241 06:02:04 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:58.241 06:02:04 -- target/referrals.sh@26 -- # sort 00:06:58.241 06:02:04 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:58.241 06:02:04 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:58.241 06:02:04 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:06:58.241 06:02:04 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:58.241 06:02:04 
-- target/referrals.sh@67 -- # jq -r .subnqn 00:06:58.241 06:02:04 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:58.241 06:02:04 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:58.499 06:02:04 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:58.499 06:02:04 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:58.499 06:02:04 -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:58.499 06:02:04 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:58.499 06:02:04 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:58.499 06:02:04 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:58.499 06:02:04 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:58.499 06:02:04 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:58.499 06:02:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:58.499 06:02:04 -- common/autotest_common.sh@10 -- # set +x 00:06:58.499 06:02:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:58.499 06:02:04 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:58.499 06:02:04 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:58.499 06:02:04 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:58.499 06:02:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:58.499 06:02:04 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:58.499 06:02:04 -- common/autotest_common.sh@10 -- # set +x 00:06:58.499 06:02:04 -- target/referrals.sh@21 -- # sort 00:06:58.499 06:02:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:58.499 06:02:04 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:58.499 06:02:04 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:58.499 06:02:04 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:58.499 06:02:04 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:58.499 06:02:04 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:58.499 06:02:04 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:58.499 06:02:04 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:58.499 06:02:04 -- target/referrals.sh@26 -- # sort 00:06:58.756 06:02:05 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:58.756 06:02:05 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:58.756 06:02:05 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:58.756 06:02:05 -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:58.756 06:02:05 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:58.756 06:02:05 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:58.756 06:02:05 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:58.756 06:02:05 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:58.756 06:02:05 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:58.756 06:02:05 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:58.756 06:02:05 -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:58.756 06:02:05 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:58.756 06:02:05 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:59.014 06:02:05 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:59.014 06:02:05 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:59.014 06:02:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.014 06:02:05 -- common/autotest_common.sh@10 -- # set +x 00:06:59.014 06:02:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.014 06:02:05 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:59.014 06:02:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.014 06:02:05 -- target/referrals.sh@82 -- # jq length 00:06:59.014 06:02:05 -- common/autotest_common.sh@10 -- # set +x 00:06:59.015 06:02:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.015 06:02:05 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:59.015 06:02:05 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:59.015 06:02:05 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:59.015 06:02:05 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:59.015 06:02:05 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:59.015 06:02:05 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:59.015 06:02:05 -- target/referrals.sh@26 -- # sort 00:06:59.015 06:02:05 -- target/referrals.sh@26 -- # echo 00:06:59.015 06:02:05 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:59.015 06:02:05 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:59.015 06:02:05 -- target/referrals.sh@86 -- # nvmftestfini 00:06:59.015 06:02:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:06:59.015 06:02:05 -- nvmf/common.sh@116 -- # sync 00:06:59.015 06:02:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:06:59.015 06:02:05 -- nvmf/common.sh@119 -- # set +e 00:06:59.015 06:02:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:06:59.015 06:02:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:06:59.015 rmmod nvme_tcp 00:06:59.015 rmmod nvme_fabrics 00:06:59.015 rmmod nvme_keyring 00:06:59.316 06:02:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:06:59.316 06:02:05 -- nvmf/common.sh@123 -- # set -e 00:06:59.316 06:02:05 -- nvmf/common.sh@124 -- # return 0 00:06:59.316 06:02:05 -- nvmf/common.sh@477 
-- # '[' -n 1024636 ']' 00:06:59.316 06:02:05 -- nvmf/common.sh@478 -- # killprocess 1024636 00:06:59.316 06:02:05 -- common/autotest_common.sh@926 -- # '[' -z 1024636 ']' 00:06:59.317 06:02:05 -- common/autotest_common.sh@930 -- # kill -0 1024636 00:06:59.317 06:02:05 -- common/autotest_common.sh@931 -- # uname 00:06:59.317 06:02:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:59.317 06:02:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1024636 00:06:59.317 06:02:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:59.317 06:02:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:59.317 06:02:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1024636' 00:06:59.317 killing process with pid 1024636 00:06:59.317 06:02:05 -- common/autotest_common.sh@945 -- # kill 1024636 00:06:59.317 06:02:05 -- common/autotest_common.sh@950 -- # wait 1024636 00:06:59.575 06:02:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:06:59.575 06:02:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:06:59.575 06:02:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:06:59.575 06:02:05 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:59.575 06:02:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:06:59.575 06:02:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.575 06:02:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:59.575 06:02:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.480 06:02:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:01.480 00:07:01.480 real 0m7.038s 00:07:01.480 user 0m12.052s 00:07:01.480 sys 0m2.013s 00:07:01.480 06:02:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.480 06:02:07 -- common/autotest_common.sh@10 -- # set +x 00:07:01.480 ************************************ 00:07:01.480 END TEST nvmf_referrals 00:07:01.480 ************************************ 00:07:01.480 06:02:07 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:01.480 06:02:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:01.480 06:02:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:01.480 06:02:07 -- common/autotest_common.sh@10 -- # set +x 00:07:01.480 ************************************ 00:07:01.480 START TEST nvmf_connect_disconnect 00:07:01.480 ************************************ 00:07:01.480 06:02:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:01.480 * Looking for test storage... 
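For reference, the referral flow that the nvmf_referrals test above exercised can be reproduced by hand against a running nvmf_tgt with SPDK's rpc.py and nvme-cli. A sketch under the same assumptions as this run (default /var/tmp/spdk.sock RPC socket, 10.0.0.2:8009 discovery listener, rpc.py taken from this workspace's SPDK checkout):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed rpc.py location in this checkout
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55   # values nvme gen-hostnqn produced earlier in the trace
  HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      $RPC nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  $RPC nvmf_discovery_get_referrals | jq length        # the test expects 3 here
  # The referrals are visible from the initiator side in the discovery log page:
  nvme discover --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
  # Referrals can also carry a subsystem NQN and are removed with the same address triple:
  $RPC nvmf_discovery_add_referral    -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
  $RPC nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1

The RPC and nvme discover invocations mirror the rpc_cmd and get_referral_ips calls in the trace above; only the rpc.py path and the explicit variable assignments are added for self-containment.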
00:07:01.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:01.480 06:02:07 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:01.480 06:02:07 -- nvmf/common.sh@7 -- # uname -s 00:07:01.480 06:02:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:01.480 06:02:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:01.480 06:02:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:01.480 06:02:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:01.480 06:02:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:01.480 06:02:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:01.480 06:02:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:01.480 06:02:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:01.480 06:02:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:01.480 06:02:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:01.480 06:02:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:01.480 06:02:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:01.480 06:02:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:01.480 06:02:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:01.480 06:02:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:01.480 06:02:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:01.480 06:02:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:01.480 06:02:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:01.480 06:02:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:01.480 06:02:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.480 06:02:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.480 06:02:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.480 06:02:07 -- paths/export.sh@5 -- # export PATH 00:07:01.480 06:02:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:01.480 06:02:07 -- nvmf/common.sh@46 -- # : 0 00:07:01.480 06:02:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:01.480 06:02:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:01.480 06:02:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:01.480 06:02:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:01.738 06:02:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:01.738 06:02:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:01.738 06:02:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:01.738 06:02:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:01.738 06:02:07 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:01.738 06:02:07 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:01.738 06:02:07 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:01.738 06:02:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:01.738 06:02:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:01.738 06:02:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:01.738 06:02:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:01.738 06:02:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:01.738 06:02:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:01.738 06:02:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:01.738 06:02:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:01.738 06:02:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:01.738 06:02:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:01.738 06:02:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:01.738 06:02:07 -- common/autotest_common.sh@10 -- # set +x 00:07:03.636 06:02:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:03.636 06:02:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:03.636 06:02:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:03.636 06:02:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:03.636 06:02:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:03.636 06:02:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:03.636 06:02:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:03.636 06:02:09 -- nvmf/common.sh@294 -- # net_devs=() 00:07:03.636 06:02:09 -- nvmf/common.sh@294 -- # local -ga net_devs 
00:07:03.636 06:02:09 -- nvmf/common.sh@295 -- # e810=() 00:07:03.636 06:02:09 -- nvmf/common.sh@295 -- # local -ga e810 00:07:03.636 06:02:09 -- nvmf/common.sh@296 -- # x722=() 00:07:03.636 06:02:09 -- nvmf/common.sh@296 -- # local -ga x722 00:07:03.636 06:02:09 -- nvmf/common.sh@297 -- # mlx=() 00:07:03.636 06:02:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:03.636 06:02:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:03.636 06:02:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:03.636 06:02:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:03.636 06:02:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:03.636 06:02:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:03.636 06:02:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:03.636 06:02:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:03.636 06:02:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:03.636 06:02:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:03.636 06:02:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:03.636 06:02:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:03.636 06:02:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:03.636 06:02:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:03.636 06:02:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:03.636 06:02:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:03.636 06:02:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:03.636 06:02:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:03.636 06:02:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:03.636 06:02:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:03.636 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:03.636 06:02:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:03.636 06:02:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:03.636 06:02:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:03.636 06:02:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:03.636 06:02:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:03.636 06:02:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:03.636 06:02:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:03.636 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:03.636 06:02:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:03.636 06:02:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:03.636 06:02:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:03.636 06:02:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:03.636 06:02:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:03.636 06:02:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:03.636 06:02:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:03.636 06:02:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:03.636 06:02:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:03.636 06:02:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:03.636 06:02:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:03.636 06:02:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:03.636 06:02:09 -- nvmf/common.sh@388 -- # echo 'Found net devices 
under 0000:0a:00.0: cvl_0_0' 00:07:03.636 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:03.636 06:02:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:03.636 06:02:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:03.636 06:02:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:03.636 06:02:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:03.636 06:02:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:03.636 06:02:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:03.636 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:03.636 06:02:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:03.636 06:02:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:03.636 06:02:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:03.636 06:02:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:03.636 06:02:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:03.636 06:02:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:03.636 06:02:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:03.636 06:02:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:03.636 06:02:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:03.636 06:02:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:03.636 06:02:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:03.636 06:02:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:03.636 06:02:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:03.636 06:02:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:03.636 06:02:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:03.636 06:02:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:03.636 06:02:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:03.636 06:02:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:03.636 06:02:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:03.636 06:02:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:03.636 06:02:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:03.636 06:02:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:03.636 06:02:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:03.636 06:02:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:03.636 06:02:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:03.636 06:02:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:03.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:03.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:07:03.636 00:07:03.636 --- 10.0.0.2 ping statistics --- 00:07:03.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:03.636 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:07:03.636 06:02:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:03.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:03.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:07:03.636 00:07:03.636 --- 10.0.0.1 ping statistics --- 00:07:03.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:03.636 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:07:03.636 06:02:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:03.636 06:02:10 -- nvmf/common.sh@410 -- # return 0 00:07:03.636 06:02:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:03.636 06:02:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:03.636 06:02:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:03.636 06:02:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:03.636 06:02:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:03.636 06:02:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:03.636 06:02:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:03.636 06:02:10 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:03.636 06:02:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:03.636 06:02:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:03.636 06:02:10 -- common/autotest_common.sh@10 -- # set +x 00:07:03.636 06:02:10 -- nvmf/common.sh@469 -- # nvmfpid=1027078 00:07:03.636 06:02:10 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:03.636 06:02:10 -- nvmf/common.sh@470 -- # waitforlisten 1027078 00:07:03.636 06:02:10 -- common/autotest_common.sh@819 -- # '[' -z 1027078 ']' 00:07:03.636 06:02:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.636 06:02:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:03.636 06:02:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.636 06:02:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:03.636 06:02:10 -- common/autotest_common.sh@10 -- # set +x 00:07:03.894 [2024-07-13 06:02:10.164656] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:03.894 [2024-07-13 06:02:10.164721] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:03.894 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.894 [2024-07-13 06:02:10.230164] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:03.894 [2024-07-13 06:02:10.348882] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:03.894 [2024-07-13 06:02:10.349060] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:03.894 [2024-07-13 06:02:10.349080] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:03.894 [2024-07-13 06:02:10.349095] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
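The app_setup_trace notices just above describe how to inspect this target instance's tracepoints while it runs. Acting on them is a one-liner each way (assuming the spdk_trace tool from this SPDK build is on PATH; the copy destination is only an example):

  spdk_trace -s nvmf -i 0                       # live snapshot, exactly as the notice suggests
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0    # keep the shm trace file for offline analysis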
00:07:03.894 [2024-07-13 06:02:10.349173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.894 [2024-07-13 06:02:10.349201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.894 [2024-07-13 06:02:10.349261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.894 [2024-07-13 06:02:10.349264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.827 06:02:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:04.827 06:02:11 -- common/autotest_common.sh@852 -- # return 0 00:07:04.827 06:02:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:04.827 06:02:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:04.827 06:02:11 -- common/autotest_common.sh@10 -- # set +x 00:07:04.827 06:02:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:04.827 06:02:11 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:04.827 06:02:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:04.827 06:02:11 -- common/autotest_common.sh@10 -- # set +x 00:07:04.827 [2024-07-13 06:02:11.150447] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:04.827 06:02:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:04.827 06:02:11 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:04.827 06:02:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:04.827 06:02:11 -- common/autotest_common.sh@10 -- # set +x 00:07:04.827 06:02:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:04.827 06:02:11 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:04.827 06:02:11 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:04.827 06:02:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:04.827 06:02:11 -- common/autotest_common.sh@10 -- # set +x 00:07:04.827 06:02:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:04.827 06:02:11 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:04.827 06:02:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:04.827 06:02:11 -- common/autotest_common.sh@10 -- # set +x 00:07:04.827 06:02:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:04.827 06:02:11 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:04.827 06:02:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:04.827 06:02:11 -- common/autotest_common.sh@10 -- # set +x 00:07:04.827 [2024-07-13 06:02:11.203168] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:04.827 06:02:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:04.827 06:02:11 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:07:04.827 06:02:11 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:07:04.827 06:02:11 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:07:04.827 06:02:11 -- target/connect_disconnect.sh@34 -- # set +x 00:07:07.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:09.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:11.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:14.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:07:16.821 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:18.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:21.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:23.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:25.704 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:28.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:30.743 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:32.638 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:35.160 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:37.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:39.612 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:42.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:44.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:46.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:49.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:51.598 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:53.492 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:56.015 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:58.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:00.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:02.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:05.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:08.028 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:09.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:12.451 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:14.350 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:16.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:19.445 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:23.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:26.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.260 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:39.713 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.240 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.133 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.553 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:09:09.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.850 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.796 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.138 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.660 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.187 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.610 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.506 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.582 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.525 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.898 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.861 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.386 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.910 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.863 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.312 06:06:02 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
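The long run of "disconnected 1 controller(s)" messages above is the expected output of the 100-iteration loop connect_disconnect.sh set up earlier: a 64 MB malloc bdev (512-byte blocks) exported as nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, connected with 8 I/O queues and immediately disconnected, 100 times. The loop body runs with xtrace turned off (set +x), so the exact invocation is not in the log; a plausible equivalent built from the variables the script just set is:

  SUBNQN=nqn.2016-06.io.spdk:cnode1
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55   # from nvme gen-hostnqn above
  HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
  for i in $(seq 1 100); do
      # 8 I/O queue pairs, matching NVME_CONNECT='nvme connect -i 8' in the trace:
      nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n "$SUBNQN" \
          --hostnqn="$HOSTNQN" --hostid="$HOSTID"
      nvme disconnect -n "$SUBNQN"   # prints: NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
  done

Each iteration disconnects exactly one controller, which is why the log shows one message per loop pass and why the whole test takes close to four minutes of real time.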
00:10:56.312 06:06:02 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:56.312 06:06:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:56.312 06:06:02 -- nvmf/common.sh@116 -- # sync 00:10:56.312 06:06:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:56.312 06:06:02 -- nvmf/common.sh@119 -- # set +e 00:10:56.312 06:06:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:56.312 06:06:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:56.312 rmmod nvme_tcp 00:10:56.312 rmmod nvme_fabrics 00:10:56.312 rmmod nvme_keyring 00:10:56.312 06:06:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:56.312 06:06:02 -- nvmf/common.sh@123 -- # set -e 00:10:56.312 06:06:02 -- nvmf/common.sh@124 -- # return 0 00:10:56.312 06:06:02 -- nvmf/common.sh@477 -- # '[' -n 1027078 ']' 00:10:56.312 06:06:02 -- nvmf/common.sh@478 -- # killprocess 1027078 00:10:56.312 06:06:02 -- common/autotest_common.sh@926 -- # '[' -z 1027078 ']' 00:10:56.312 06:06:02 -- common/autotest_common.sh@930 -- # kill -0 1027078 00:10:56.312 06:06:02 -- common/autotest_common.sh@931 -- # uname 00:10:56.312 06:06:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:56.312 06:06:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1027078 00:10:56.312 06:06:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:56.312 06:06:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:56.312 06:06:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1027078' 00:10:56.312 killing process with pid 1027078 00:10:56.312 06:06:02 -- common/autotest_common.sh@945 -- # kill 1027078 00:10:56.312 06:06:02 -- common/autotest_common.sh@950 -- # wait 1027078 00:10:56.312 06:06:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:56.312 06:06:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:56.312 06:06:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:56.312 06:06:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:56.312 06:06:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:56.312 06:06:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.312 06:06:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:56.312 06:06:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.210 06:06:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:10:58.210 00:10:58.210 real 3m56.797s 00:10:58.210 user 15m1.859s 00:10:58.210 sys 0m35.080s 00:10:58.468 06:06:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:58.468 06:06:04 -- common/autotest_common.sh@10 -- # set +x 00:10:58.468 ************************************ 00:10:58.468 END TEST nvmf_connect_disconnect 00:10:58.468 ************************************ 00:10:58.468 06:06:04 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:58.468 06:06:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:58.468 06:06:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:58.468 06:06:04 -- common/autotest_common.sh@10 -- # set +x 00:10:58.468 ************************************ 00:10:58.468 START TEST nvmf_multitarget 00:10:58.468 ************************************ 00:10:58.468 06:06:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:58.468 * Looking for test storage... 
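The nvmftestfini sequence above tears the fixture down in the reverse order it was built: unload the host-side NVMe modules, kill the nvmf_tgt reactor process, and drop the namespace and addresses. The namespace-removal helper runs with xtrace disabled, so its exact commands are not in the log; a hedged equivalent under the same naming assumptions:

  modprobe -v -r nvme-tcp                       # mirrors the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
  modprobe -v -r nvme-fabrics
  kill "$NVMFPID"                               # nvmfpid recorded when the target started (1027078 for this run)
  ip netns delete cvl_0_0_ns_spdk               # assumption: the hidden _remove_spdk_ns helper does the equivalent
  ip -4 addr flush cvl_0_1                      # matches the final 'ip -4 addr flush cvl_0_1' in the trace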
00:10:58.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:58.468 06:06:04 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:58.468 06:06:04 -- nvmf/common.sh@7 -- # uname -s 00:10:58.468 06:06:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:58.468 06:06:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:58.468 06:06:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:58.468 06:06:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:58.468 06:06:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:58.468 06:06:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:58.468 06:06:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:58.468 06:06:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:58.468 06:06:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:58.468 06:06:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:58.468 06:06:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:58.468 06:06:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:58.468 06:06:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:58.469 06:06:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:58.469 06:06:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:58.469 06:06:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:58.469 06:06:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:58.469 06:06:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:58.469 06:06:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:58.469 06:06:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.469 06:06:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.469 06:06:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.469 06:06:04 -- paths/export.sh@5 -- # export PATH 00:10:58.469 06:06:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.469 06:06:04 -- nvmf/common.sh@46 -- # : 0 00:10:58.469 06:06:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:58.469 06:06:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:58.469 06:06:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:58.469 06:06:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:58.469 06:06:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:58.469 06:06:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:58.469 06:06:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:58.469 06:06:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:58.469 06:06:04 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:58.469 06:06:04 -- target/multitarget.sh@15 -- # nvmftestinit 00:10:58.469 06:06:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:58.469 06:06:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:58.469 06:06:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:58.469 06:06:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:58.469 06:06:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:58.469 06:06:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.469 06:06:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:58.469 06:06:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.469 06:06:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:10:58.469 06:06:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:10:58.469 06:06:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:10:58.469 06:06:04 -- common/autotest_common.sh@10 -- # set +x 00:11:00.370 06:06:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:00.370 06:06:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:11:00.370 06:06:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:11:00.370 06:06:06 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:11:00.370 06:06:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:11:00.370 06:06:06 -- nvmf/common.sh@292 -- # pci_drivers=() 00:11:00.370 06:06:06 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:11:00.370 06:06:06 -- nvmf/common.sh@294 -- # net_devs=() 00:11:00.370 06:06:06 -- nvmf/common.sh@294 -- # local -ga net_devs 00:11:00.370 06:06:06 -- 
nvmf/common.sh@295 -- # e810=() 00:11:00.370 06:06:06 -- nvmf/common.sh@295 -- # local -ga e810 00:11:00.370 06:06:06 -- nvmf/common.sh@296 -- # x722=() 00:11:00.370 06:06:06 -- nvmf/common.sh@296 -- # local -ga x722 00:11:00.370 06:06:06 -- nvmf/common.sh@297 -- # mlx=() 00:11:00.370 06:06:06 -- nvmf/common.sh@297 -- # local -ga mlx 00:11:00.370 06:06:06 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:00.370 06:06:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:00.370 06:06:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:00.370 06:06:06 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:00.370 06:06:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:00.370 06:06:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:00.370 06:06:06 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:00.370 06:06:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:00.370 06:06:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:00.370 06:06:06 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:00.370 06:06:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:00.370 06:06:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:11:00.370 06:06:06 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:11:00.370 06:06:06 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:11:00.370 06:06:06 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:11:00.370 06:06:06 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:11:00.370 06:06:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:11:00.370 06:06:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:00.370 06:06:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:00.370 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:00.370 06:06:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:00.370 06:06:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:00.370 06:06:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.370 06:06:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.370 06:06:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:00.370 06:06:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:00.370 06:06:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:00.370 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:00.370 06:06:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:00.370 06:06:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:00.370 06:06:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.370 06:06:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.370 06:06:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:00.370 06:06:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:11:00.370 06:06:06 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:11:00.370 06:06:06 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:11:00.370 06:06:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:00.370 06:06:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.370 06:06:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:00.371 06:06:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.371 06:06:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:11:00.371 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:00.371 06:06:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.371 06:06:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:00.371 06:06:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.371 06:06:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:00.371 06:06:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.371 06:06:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:00.371 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:00.371 06:06:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.371 06:06:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:11:00.371 06:06:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:11:00.371 06:06:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:11:00.371 06:06:06 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:11:00.371 06:06:06 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:11:00.371 06:06:06 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.371 06:06:06 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:00.371 06:06:06 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:00.371 06:06:06 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:11:00.371 06:06:06 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:00.371 06:06:06 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:00.371 06:06:06 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:11:00.371 06:06:06 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:00.371 06:06:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.371 06:06:06 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:11:00.371 06:06:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:11:00.371 06:06:06 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:11:00.371 06:06:06 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:00.371 06:06:06 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:00.371 06:06:06 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:00.371 06:06:06 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:11:00.371 06:06:06 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:00.629 06:06:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:00.629 06:06:06 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:00.629 06:06:06 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:11:00.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:00.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:11:00.629 00:11:00.629 --- 10.0.0.2 ping statistics --- 00:11:00.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.629 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:11:00.629 06:06:06 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:00.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:00.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:11:00.629 00:11:00.629 --- 10.0.0.1 ping statistics --- 00:11:00.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.629 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:11:00.629 06:06:06 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:00.629 06:06:06 -- nvmf/common.sh@410 -- # return 0 00:11:00.629 06:06:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:00.629 06:06:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:00.629 06:06:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:00.629 06:06:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:00.629 06:06:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:00.629 06:06:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:00.629 06:06:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:00.629 06:06:06 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:00.629 06:06:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:00.629 06:06:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:00.629 06:06:06 -- common/autotest_common.sh@10 -- # set +x 00:11:00.629 06:06:06 -- nvmf/common.sh@469 -- # nvmfpid=1059211 00:11:00.629 06:06:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:00.629 06:06:06 -- nvmf/common.sh@470 -- # waitforlisten 1059211 00:11:00.629 06:06:06 -- common/autotest_common.sh@819 -- # '[' -z 1059211 ']' 00:11:00.629 06:06:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.629 06:06:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:00.630 06:06:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.630 06:06:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:00.630 06:06:06 -- common/autotest_common.sh@10 -- # set +x 00:11:00.630 [2024-07-13 06:06:06.970348] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:00.630 [2024-07-13 06:06:06.970411] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.630 EAL: No free 2048 kB hugepages reported on node 1 00:11:00.630 [2024-07-13 06:06:07.040470] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:00.888 [2024-07-13 06:06:07.164065] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:00.888 [2024-07-13 06:06:07.164225] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:00.888 [2024-07-13 06:06:07.164245] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:00.888 [2024-07-13 06:06:07.164267] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
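The block just above is nvmf_tcp_init from test/nvmf/common.sh: after mapping the two E810 functions 0000:0a:00.0 and 0000:0a:00.1 to the kernel names cvl_0_0 and cvl_0_1 via /sys/bus/pci/devices/<bdf>/net, the harness moves the target-side port into its own network namespace, addresses both ends on 10.0.0.0/24, opens TCP port 4420 in the firewall, and ping-checks the path before starting the target. A condensed sketch of that plumbing, using the interface names and addresses from this run (they will differ on other hosts), looks roughly like:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                             # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> root namespace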
00:11:00.888 [2024-07-13 06:06:07.164325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:00.888 [2024-07-13 06:06:07.164383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:00.888 [2024-07-13 06:06:07.164406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:00.888 [2024-07-13 06:06:07.164409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.481 06:06:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:01.481 06:06:07 -- common/autotest_common.sh@852 -- # return 0 00:11:01.481 06:06:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:01.481 06:06:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:01.481 06:06:07 -- common/autotest_common.sh@10 -- # set +x 00:11:01.481 06:06:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:01.481 06:06:07 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:01.481 06:06:07 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:01.481 06:06:07 -- target/multitarget.sh@21 -- # jq length 00:11:01.738 06:06:08 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:01.738 06:06:08 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:01.738 "nvmf_tgt_1" 00:11:01.738 06:06:08 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:01.995 "nvmf_tgt_2" 00:11:01.995 06:06:08 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:01.995 06:06:08 -- target/multitarget.sh@28 -- # jq length 00:11:01.995 06:06:08 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:01.995 06:06:08 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:01.995 true 00:11:01.995 06:06:08 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:02.252 true 00:11:02.252 06:06:08 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:02.252 06:06:08 -- target/multitarget.sh@35 -- # jq length 00:11:02.252 06:06:08 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:02.252 06:06:08 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:02.252 06:06:08 -- target/multitarget.sh@41 -- # nvmftestfini 00:11:02.252 06:06:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:02.252 06:06:08 -- nvmf/common.sh@116 -- # sync 00:11:02.252 06:06:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:02.252 06:06:08 -- nvmf/common.sh@119 -- # set +e 00:11:02.252 06:06:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:02.252 06:06:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:02.252 rmmod nvme_tcp 00:11:02.252 rmmod nvme_fabrics 00:11:02.252 rmmod nvme_keyring 00:11:02.252 06:06:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:02.252 06:06:08 -- nvmf/common.sh@123 -- # set -e 00:11:02.252 06:06:08 -- nvmf/common.sh@124 -- # return 0 
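What ran between nvmfappstart and the module unload above is the whole body of the nvmf_multitarget test: the suite's multitarget_rpc.py helper creates two extra targets, confirms the target count with jq, deletes them again, and confirms the count drops back to one. Issued by hand against the same running target, the sequence is roughly:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    $rpc nvmf_get_targets | jq length              # 1: only the default target exists
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    $rpc nvmf_get_targets | jq length              # 3
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    $rpc nvmf_get_targets | jq length              # back to 1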
00:11:02.252 06:06:08 -- nvmf/common.sh@477 -- # '[' -n 1059211 ']' 00:11:02.252 06:06:08 -- nvmf/common.sh@478 -- # killprocess 1059211 00:11:02.252 06:06:08 -- common/autotest_common.sh@926 -- # '[' -z 1059211 ']' 00:11:02.252 06:06:08 -- common/autotest_common.sh@930 -- # kill -0 1059211 00:11:02.509 06:06:08 -- common/autotest_common.sh@931 -- # uname 00:11:02.509 06:06:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:02.509 06:06:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1059211 00:11:02.509 06:06:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:02.509 06:06:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:02.509 06:06:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1059211' 00:11:02.509 killing process with pid 1059211 00:11:02.509 06:06:08 -- common/autotest_common.sh@945 -- # kill 1059211 00:11:02.509 06:06:08 -- common/autotest_common.sh@950 -- # wait 1059211 00:11:02.766 06:06:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:02.766 06:06:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:02.766 06:06:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:02.766 06:06:09 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:02.766 06:06:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:02.766 06:06:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.766 06:06:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:02.766 06:06:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.666 06:06:11 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:11:04.666 00:11:04.666 real 0m6.353s 00:11:04.666 user 0m9.098s 00:11:04.666 sys 0m1.893s 00:11:04.666 06:06:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:04.666 06:06:11 -- common/autotest_common.sh@10 -- # set +x 00:11:04.666 ************************************ 00:11:04.666 END TEST nvmf_multitarget 00:11:04.666 ************************************ 00:11:04.666 06:06:11 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:04.666 06:06:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:04.666 06:06:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:04.666 06:06:11 -- common/autotest_common.sh@10 -- # set +x 00:11:04.666 ************************************ 00:11:04.666 START TEST nvmf_rpc 00:11:04.666 ************************************ 00:11:04.666 06:06:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:04.666 * Looking for test storage... 
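run_test is the autotest wrapper that prints the START/END banners and the real/user/sys timing shown above and propagates the wrapped script's exit status; here it hands off to the nvmf_rpc suite. Reproducing that hand-off outside Jenkins should amount to little more than the following, with the workspace path as in this run and the autorun-spdk.conf environment already exported:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./test/nvmf/target/rpc.sh --transport=tcp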
00:11:04.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:04.934 06:06:11 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:04.934 06:06:11 -- nvmf/common.sh@7 -- # uname -s 00:11:04.934 06:06:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:04.934 06:06:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:04.934 06:06:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:04.934 06:06:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:04.934 06:06:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:04.934 06:06:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:04.934 06:06:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:04.934 06:06:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:04.934 06:06:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:04.934 06:06:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:04.934 06:06:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:04.934 06:06:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:04.934 06:06:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:04.934 06:06:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:04.934 06:06:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:04.934 06:06:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:04.934 06:06:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.934 06:06:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.934 06:06:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.934 06:06:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.934 06:06:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.934 06:06:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.934 06:06:11 -- paths/export.sh@5 -- # export PATH 00:11:04.934 06:06:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.934 06:06:11 -- nvmf/common.sh@46 -- # : 0 00:11:04.934 06:06:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:04.934 06:06:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:04.934 06:06:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:04.934 06:06:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:04.934 06:06:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:04.934 06:06:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:04.934 06:06:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:04.934 06:06:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:04.934 06:06:11 -- target/rpc.sh@11 -- # loops=5 00:11:04.934 06:06:11 -- target/rpc.sh@23 -- # nvmftestinit 00:11:04.934 06:06:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:04.935 06:06:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:04.935 06:06:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:04.935 06:06:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:04.935 06:06:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:04.935 06:06:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.935 06:06:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:04.935 06:06:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.935 06:06:11 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:11:04.935 06:06:11 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:11:04.935 06:06:11 -- nvmf/common.sh@284 -- # xtrace_disable 00:11:04.935 06:06:11 -- common/autotest_common.sh@10 -- # set +x 00:11:06.838 06:06:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:06.838 06:06:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:11:06.838 06:06:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:11:06.838 06:06:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:11:06.838 06:06:13 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:11:06.838 06:06:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:11:06.838 06:06:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:11:06.838 06:06:13 -- nvmf/common.sh@294 -- # net_devs=() 00:11:06.838 06:06:13 -- nvmf/common.sh@294 -- # local -ga net_devs 00:11:06.838 06:06:13 -- nvmf/common.sh@295 -- # e810=() 00:11:06.838 06:06:13 -- nvmf/common.sh@295 -- # local -ga e810 00:11:06.838 
06:06:13 -- nvmf/common.sh@296 -- # x722=() 00:11:06.838 06:06:13 -- nvmf/common.sh@296 -- # local -ga x722 00:11:06.838 06:06:13 -- nvmf/common.sh@297 -- # mlx=() 00:11:06.838 06:06:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:11:06.838 06:06:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:06.838 06:06:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:06.838 06:06:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:06.838 06:06:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:06.838 06:06:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:06.838 06:06:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:06.838 06:06:13 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:06.838 06:06:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:06.838 06:06:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:06.838 06:06:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:06.838 06:06:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:06.838 06:06:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:11:06.838 06:06:13 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:11:06.838 06:06:13 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:11:06.838 06:06:13 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:11:06.838 06:06:13 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:11:06.838 06:06:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:11:06.838 06:06:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:06.838 06:06:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:06.838 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:06.838 06:06:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:06.838 06:06:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:06.838 06:06:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.838 06:06:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.838 06:06:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:06.838 06:06:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:06.838 06:06:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:06.838 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:06.838 06:06:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:06.838 06:06:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:06.838 06:06:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:06.838 06:06:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:06.838 06:06:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:06.838 06:06:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:11:06.838 06:06:13 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:11:06.838 06:06:13 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:11:06.838 06:06:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:06.838 06:06:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.838 06:06:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:06.838 06:06:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.838 06:06:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:06.838 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:06.838 06:06:13 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:06.838 06:06:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:06.838 06:06:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:06.838 06:06:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:06.838 06:06:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:06.838 06:06:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:06.838 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:06.838 06:06:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:06.838 06:06:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:11:06.838 06:06:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:11:06.838 06:06:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:11:06.838 06:06:13 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:11:06.838 06:06:13 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:11:06.838 06:06:13 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:06.838 06:06:13 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:06.838 06:06:13 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:06.839 06:06:13 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:11:06.839 06:06:13 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:06.839 06:06:13 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:06.839 06:06:13 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:11:06.839 06:06:13 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:06.839 06:06:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:06.839 06:06:13 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:11:06.839 06:06:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:11:06.839 06:06:13 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:11:06.839 06:06:13 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:06.839 06:06:13 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:06.839 06:06:13 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:06.839 06:06:13 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:11:06.839 06:06:13 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:07.097 06:06:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:07.097 06:06:13 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:07.097 06:06:13 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:11:07.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:07.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:11:07.097 00:11:07.097 --- 10.0.0.2 ping statistics --- 00:11:07.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.097 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:11:07.097 06:06:13 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:07.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:07.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:11:07.097 00:11:07.097 --- 10.0.0.1 ping statistics --- 00:11:07.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.097 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:11:07.097 06:06:13 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:07.097 06:06:13 -- nvmf/common.sh@410 -- # return 0 00:11:07.097 06:06:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:07.097 06:06:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:07.097 06:06:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:07.097 06:06:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:07.097 06:06:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:07.097 06:06:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:07.097 06:06:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:07.097 06:06:13 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:07.097 06:06:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:07.097 06:06:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:07.097 06:06:13 -- common/autotest_common.sh@10 -- # set +x 00:11:07.097 06:06:13 -- nvmf/common.sh@469 -- # nvmfpid=1061962 00:11:07.097 06:06:13 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:07.097 06:06:13 -- nvmf/common.sh@470 -- # waitforlisten 1061962 00:11:07.097 06:06:13 -- common/autotest_common.sh@819 -- # '[' -z 1061962 ']' 00:11:07.097 06:06:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.097 06:06:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:07.097 06:06:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.097 06:06:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:07.097 06:06:13 -- common/autotest_common.sh@10 -- # set +x 00:11:07.097 [2024-07-13 06:06:13.460343] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:07.097 [2024-07-13 06:06:13.460421] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.097 EAL: No free 2048 kB hugepages reported on node 1 00:11:07.097 [2024-07-13 06:06:13.525098] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:07.355 [2024-07-13 06:06:13.636092] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:07.355 [2024-07-13 06:06:13.636255] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:07.355 [2024-07-13 06:06:13.636272] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:07.355 [2024-07-13 06:06:13.636285] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
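nvmfappstart launches nvmf_tgt inside the namespace and then waits in waitforlisten until the application's RPC socket answers before any rpc_cmd is issued. A minimal stand-in for that step, assuming the default /var/tmp/spdk.sock socket and using rpc_get_methods purely as a readiness probe, might be:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the RPC socket until the target answers; rpc_get_methods is a cheap query
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done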
00:11:07.355 [2024-07-13 06:06:13.636368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.355 [2024-07-13 06:06:13.636391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:07.355 [2024-07-13 06:06:13.636464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:07.355 [2024-07-13 06:06:13.636466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.286 06:06:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:08.286 06:06:14 -- common/autotest_common.sh@852 -- # return 0 00:11:08.286 06:06:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:08.286 06:06:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:08.286 06:06:14 -- common/autotest_common.sh@10 -- # set +x 00:11:08.286 06:06:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:08.286 06:06:14 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:08.286 06:06:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:08.286 06:06:14 -- common/autotest_common.sh@10 -- # set +x 00:11:08.286 06:06:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:08.286 06:06:14 -- target/rpc.sh@26 -- # stats='{ 00:11:08.286 "tick_rate": 2700000000, 00:11:08.286 "poll_groups": [ 00:11:08.286 { 00:11:08.286 "name": "nvmf_tgt_poll_group_0", 00:11:08.286 "admin_qpairs": 0, 00:11:08.286 "io_qpairs": 0, 00:11:08.286 "current_admin_qpairs": 0, 00:11:08.286 "current_io_qpairs": 0, 00:11:08.286 "pending_bdev_io": 0, 00:11:08.286 "completed_nvme_io": 0, 00:11:08.286 "transports": [] 00:11:08.286 }, 00:11:08.286 { 00:11:08.286 "name": "nvmf_tgt_poll_group_1", 00:11:08.286 "admin_qpairs": 0, 00:11:08.286 "io_qpairs": 0, 00:11:08.286 "current_admin_qpairs": 0, 00:11:08.286 "current_io_qpairs": 0, 00:11:08.286 "pending_bdev_io": 0, 00:11:08.286 "completed_nvme_io": 0, 00:11:08.286 "transports": [] 00:11:08.286 }, 00:11:08.286 { 00:11:08.286 "name": "nvmf_tgt_poll_group_2", 00:11:08.286 "admin_qpairs": 0, 00:11:08.286 "io_qpairs": 0, 00:11:08.287 "current_admin_qpairs": 0, 00:11:08.287 "current_io_qpairs": 0, 00:11:08.287 "pending_bdev_io": 0, 00:11:08.287 "completed_nvme_io": 0, 00:11:08.287 "transports": [] 00:11:08.287 }, 00:11:08.287 { 00:11:08.287 "name": "nvmf_tgt_poll_group_3", 00:11:08.287 "admin_qpairs": 0, 00:11:08.287 "io_qpairs": 0, 00:11:08.287 "current_admin_qpairs": 0, 00:11:08.287 "current_io_qpairs": 0, 00:11:08.287 "pending_bdev_io": 0, 00:11:08.287 "completed_nvme_io": 0, 00:11:08.287 "transports": [] 00:11:08.287 } 00:11:08.287 ] 00:11:08.287 }' 00:11:08.287 06:06:14 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:08.287 06:06:14 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:08.287 06:06:14 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:08.287 06:06:14 -- target/rpc.sh@15 -- # wc -l 00:11:08.287 06:06:14 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:08.287 06:06:14 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:08.287 06:06:14 -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:08.287 06:06:14 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:08.287 06:06:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:08.287 06:06:14 -- common/autotest_common.sh@10 -- # set +x 00:11:08.287 [2024-07-13 06:06:14.572823] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:08.287 06:06:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:08.287 06:06:14 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:08.287 06:06:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:08.287 06:06:14 -- common/autotest_common.sh@10 -- # set +x 00:11:08.287 06:06:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:08.287 06:06:14 -- target/rpc.sh@33 -- # stats='{ 00:11:08.287 "tick_rate": 2700000000, 00:11:08.287 "poll_groups": [ 00:11:08.287 { 00:11:08.287 "name": "nvmf_tgt_poll_group_0", 00:11:08.287 "admin_qpairs": 0, 00:11:08.287 "io_qpairs": 0, 00:11:08.287 "current_admin_qpairs": 0, 00:11:08.287 "current_io_qpairs": 0, 00:11:08.287 "pending_bdev_io": 0, 00:11:08.287 "completed_nvme_io": 0, 00:11:08.287 "transports": [ 00:11:08.287 { 00:11:08.287 "trtype": "TCP" 00:11:08.287 } 00:11:08.287 ] 00:11:08.287 }, 00:11:08.287 { 00:11:08.287 "name": "nvmf_tgt_poll_group_1", 00:11:08.287 "admin_qpairs": 0, 00:11:08.287 "io_qpairs": 0, 00:11:08.287 "current_admin_qpairs": 0, 00:11:08.287 "current_io_qpairs": 0, 00:11:08.287 "pending_bdev_io": 0, 00:11:08.287 "completed_nvme_io": 0, 00:11:08.287 "transports": [ 00:11:08.287 { 00:11:08.287 "trtype": "TCP" 00:11:08.287 } 00:11:08.287 ] 00:11:08.287 }, 00:11:08.287 { 00:11:08.287 "name": "nvmf_tgt_poll_group_2", 00:11:08.287 "admin_qpairs": 0, 00:11:08.287 "io_qpairs": 0, 00:11:08.287 "current_admin_qpairs": 0, 00:11:08.287 "current_io_qpairs": 0, 00:11:08.287 "pending_bdev_io": 0, 00:11:08.287 "completed_nvme_io": 0, 00:11:08.287 "transports": [ 00:11:08.287 { 00:11:08.287 "trtype": "TCP" 00:11:08.287 } 00:11:08.287 ] 00:11:08.287 }, 00:11:08.287 { 00:11:08.287 "name": "nvmf_tgt_poll_group_3", 00:11:08.287 "admin_qpairs": 0, 00:11:08.287 "io_qpairs": 0, 00:11:08.287 "current_admin_qpairs": 0, 00:11:08.287 "current_io_qpairs": 0, 00:11:08.287 "pending_bdev_io": 0, 00:11:08.287 "completed_nvme_io": 0, 00:11:08.287 "transports": [ 00:11:08.287 { 00:11:08.287 "trtype": "TCP" 00:11:08.287 } 00:11:08.287 ] 00:11:08.287 } 00:11:08.287 ] 00:11:08.287 }' 00:11:08.287 06:06:14 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:08.287 06:06:14 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:08.287 06:06:14 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:08.287 06:06:14 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:08.287 06:06:14 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:08.287 06:06:14 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:08.287 06:06:14 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:08.287 06:06:14 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:08.287 06:06:14 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:08.287 06:06:14 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:08.287 06:06:14 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:08.287 06:06:14 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:08.287 06:06:14 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:08.287 06:06:14 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:08.287 06:06:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:08.287 06:06:14 -- common/autotest_common.sh@10 -- # set +x 00:11:08.287 Malloc1 00:11:08.287 06:06:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:08.287 06:06:14 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:08.287 06:06:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:08.287 06:06:14 -- common/autotest_common.sh@10 -- # set +x 00:11:08.287 
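rpc_cmd above is the harness shorthand for scripts/rpc.py pointed at the target's /var/tmp/spdk.sock. The two nvmf_get_stats snapshots bracket the creation of the TCP transport (after which each of the four poll groups reports a TCP transport with zero qpairs), and the test then provisions a malloc-backed subsystem. Done by hand with the same arguments as this run, that is roughly:

    rpc='./scripts/rpc.py -s /var/tmp/spdk.sock'
    $rpc nvmf_create_transport -t tcp -o -u 8192       # options exactly as passed by rpc.sh
    $rpc bdev_malloc_create 64 512 -b Malloc1          # 64 MB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME

The nvmf_subsystem_add_ns, nvmf_subsystem_allow_any_host -d and nvmf_subsystem_add_listener calls that follow in the log attach Malloc1, close the subsystem to unknown hosts, and expose it on 10.0.0.2 port 4420.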
06:06:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:08.287 06:06:14 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:08.287 06:06:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:08.287 06:06:14 -- common/autotest_common.sh@10 -- # set +x 00:11:08.287 06:06:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:08.287 06:06:14 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:08.287 06:06:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:08.287 06:06:14 -- common/autotest_common.sh@10 -- # set +x 00:11:08.287 06:06:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:08.287 06:06:14 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:08.287 06:06:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:08.287 06:06:14 -- common/autotest_common.sh@10 -- # set +x 00:11:08.287 [2024-07-13 06:06:14.712509] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:08.287 06:06:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:08.287 06:06:14 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:08.287 06:06:14 -- common/autotest_common.sh@640 -- # local es=0 00:11:08.287 06:06:14 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:08.287 06:06:14 -- common/autotest_common.sh@628 -- # local arg=nvme 00:11:08.287 06:06:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:08.287 06:06:14 -- common/autotest_common.sh@632 -- # type -t nvme 00:11:08.287 06:06:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:08.287 06:06:14 -- common/autotest_common.sh@634 -- # type -P nvme 00:11:08.287 06:06:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:08.287 06:06:14 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:11:08.287 06:06:14 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:11:08.287 06:06:14 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:08.287 [2024-07-13 06:06:14.734974] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:08.287 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:08.287 could not add new controller: failed to write to nvme-fabrics device 00:11:08.287 06:06:14 -- common/autotest_common.sh@643 -- # es=1 00:11:08.287 06:06:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:08.287 06:06:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:08.287 06:06:14 -- common/autotest_common.sh@667 -- # 
(( !es == 0 )) 00:11:08.287 06:06:14 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:08.287 06:06:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:08.287 06:06:14 -- common/autotest_common.sh@10 -- # set +x 00:11:08.287 06:06:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:08.287 06:06:14 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:08.850 06:06:15 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:09.106 06:06:15 -- common/autotest_common.sh@1177 -- # local i=0 00:11:09.106 06:06:15 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:09.106 06:06:15 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:09.106 06:06:15 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:10.996 06:06:17 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:10.996 06:06:17 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:10.996 06:06:17 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:10.996 06:06:17 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:10.996 06:06:17 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:10.996 06:06:17 -- common/autotest_common.sh@1187 -- # return 0 00:11:10.996 06:06:17 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:10.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.996 06:06:17 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:10.996 06:06:17 -- common/autotest_common.sh@1198 -- # local i=0 00:11:10.996 06:06:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:10.996 06:06:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.996 06:06:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:10.996 06:06:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.996 06:06:17 -- common/autotest_common.sh@1210 -- # return 0 00:11:10.996 06:06:17 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:10.996 06:06:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:10.996 06:06:17 -- common/autotest_common.sh@10 -- # set +x 00:11:10.996 06:06:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:10.996 06:06:17 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:10.996 06:06:17 -- common/autotest_common.sh@640 -- # local es=0 00:11:10.996 06:06:17 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:10.996 06:06:17 -- common/autotest_common.sh@628 -- # local arg=nvme 00:11:10.996 06:06:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:10.996 06:06:17 -- common/autotest_common.sh@632 -- # type -t nvme 00:11:10.996 06:06:17 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:10.996 06:06:17 -- common/autotest_common.sh@634 -- # type -P nvme 00:11:10.996 06:06:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:10.996 06:06:17 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:11:10.996 06:06:17 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:11:10.996 06:06:17 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:10.996 [2024-07-13 06:06:17.505470] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:11.253 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:11.253 could not add new controller: failed to write to nvme-fabrics device 00:11:11.253 06:06:17 -- common/autotest_common.sh@643 -- # es=1 00:11:11.253 06:06:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:11.253 06:06:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:11.253 06:06:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:11.253 06:06:17 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:11.253 06:06:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:11.253 06:06:17 -- common/autotest_common.sh@10 -- # set +x 00:11:11.253 06:06:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:11.253 06:06:17 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:11.819 06:06:18 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:11.819 06:06:18 -- common/autotest_common.sh@1177 -- # local i=0 00:11:11.819 06:06:18 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:11.819 06:06:18 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:11.819 06:06:18 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:13.717 06:06:20 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:13.717 06:06:20 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:13.717 06:06:20 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:13.975 06:06:20 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:13.975 06:06:20 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:13.975 06:06:20 -- common/autotest_common.sh@1187 -- # return 0 00:11:13.975 06:06:20 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:13.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.975 06:06:20 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:13.975 06:06:20 -- common/autotest_common.sh@1198 -- # local i=0 00:11:13.975 06:06:20 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:13.975 06:06:20 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:13.975 06:06:20 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:13.975 06:06:20 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:13.975 06:06:20 -- common/autotest_common.sh@1210 -- # return 0 00:11:13.975 06:06:20 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:13.975 06:06:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:13.975 06:06:20 -- common/autotest_common.sh@10 -- # set +x 00:11:13.975 06:06:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:13.975 06:06:20 -- target/rpc.sh@81 -- # seq 1 5 00:11:13.975 06:06:20 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:13.975 06:06:20 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:13.975 06:06:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:13.975 06:06:20 -- common/autotest_common.sh@10 -- # set +x 00:11:13.975 06:06:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:13.975 06:06:20 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:13.975 06:06:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:13.975 06:06:20 -- common/autotest_common.sh@10 -- # set +x 00:11:13.975 [2024-07-13 06:06:20.373941] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:13.975 06:06:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:13.975 06:06:20 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:13.975 06:06:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:13.975 06:06:20 -- common/autotest_common.sh@10 -- # set +x 00:11:13.975 06:06:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:13.975 06:06:20 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:13.975 06:06:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:13.975 06:06:20 -- common/autotest_common.sh@10 -- # set +x 00:11:13.975 06:06:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:13.975 06:06:20 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:14.909 06:06:21 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:14.909 06:06:21 -- common/autotest_common.sh@1177 -- # local i=0 00:11:14.909 06:06:21 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:14.909 06:06:21 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:14.909 06:06:21 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:16.812 06:06:23 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:16.812 06:06:23 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:16.812 06:06:23 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:16.812 06:06:23 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:16.812 06:06:23 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:16.812 06:06:23 -- common/autotest_common.sh@1187 -- # return 0 00:11:16.812 06:06:23 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:16.812 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.812 06:06:23 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:16.812 06:06:23 -- common/autotest_common.sh@1198 -- # local i=0 00:11:16.812 06:06:23 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:16.812 06:06:23 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 
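The stretch above is the host-ACL portion of rpc.sh: with allow_any_host disabled, the first nvme connect is rejected with 'does not allow host'; it succeeds once the host NQN is added via nvmf_subsystem_add_host, fails again after nvmf_subsystem_remove_host, and succeeds once more after nvmf_subsystem_allow_any_host -e. Stripped of the NOT/waitforserial wrappers, and still using the harness's rpc_cmd shorthand for scripts/rpc.py, the pattern (host NQN/ID as generated for this run) is roughly:

    rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1           # deny unknown hosts
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"                         # rejected
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"                         # accepted
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN" # rejected again
    rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1           # or open to any host

The five loop iterations that follow each repeat create subsystem -> add listener -> add namespace -> allow any host -> nvme connect -> wait for the SPDKISFASTANDAWESOME serial -> disconnect -> remove namespace -> delete subsystem.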
00:11:16.812 06:06:23 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:16.812 06:06:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:16.812 06:06:23 -- common/autotest_common.sh@1210 -- # return 0 00:11:16.812 06:06:23 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:16.812 06:06:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.812 06:06:23 -- common/autotest_common.sh@10 -- # set +x 00:11:16.812 06:06:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.812 06:06:23 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.812 06:06:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.812 06:06:23 -- common/autotest_common.sh@10 -- # set +x 00:11:16.812 06:06:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.812 06:06:23 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:16.812 06:06:23 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:16.812 06:06:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.812 06:06:23 -- common/autotest_common.sh@10 -- # set +x 00:11:16.812 06:06:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.812 06:06:23 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:16.812 06:06:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.812 06:06:23 -- common/autotest_common.sh@10 -- # set +x 00:11:16.812 [2024-07-13 06:06:23.234244] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.812 06:06:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.812 06:06:23 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:16.812 06:06:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.812 06:06:23 -- common/autotest_common.sh@10 -- # set +x 00:11:16.812 06:06:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.812 06:06:23 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:16.812 06:06:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:16.812 06:06:23 -- common/autotest_common.sh@10 -- # set +x 00:11:16.812 06:06:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:16.812 06:06:23 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:17.745 06:06:23 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:17.745 06:06:23 -- common/autotest_common.sh@1177 -- # local i=0 00:11:17.745 06:06:23 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:17.745 06:06:23 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:17.745 06:06:23 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:19.644 06:06:25 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:19.644 06:06:25 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:19.644 06:06:25 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:19.644 06:06:25 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:19.644 06:06:25 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:19.644 06:06:25 -- 
common/autotest_common.sh@1187 -- # return 0 00:11:19.644 06:06:25 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:19.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.644 06:06:26 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:19.644 06:06:26 -- common/autotest_common.sh@1198 -- # local i=0 00:11:19.644 06:06:26 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:19.644 06:06:26 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:19.644 06:06:26 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:19.644 06:06:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:19.644 06:06:26 -- common/autotest_common.sh@1210 -- # return 0 00:11:19.644 06:06:26 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:19.644 06:06:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:19.644 06:06:26 -- common/autotest_common.sh@10 -- # set +x 00:11:19.644 06:06:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:19.644 06:06:26 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:19.644 06:06:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:19.644 06:06:26 -- common/autotest_common.sh@10 -- # set +x 00:11:19.644 06:06:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:19.644 06:06:26 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:19.644 06:06:26 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:19.644 06:06:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:19.644 06:06:26 -- common/autotest_common.sh@10 -- # set +x 00:11:19.644 06:06:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:19.644 06:06:26 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:19.644 06:06:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:19.644 06:06:26 -- common/autotest_common.sh@10 -- # set +x 00:11:19.644 [2024-07-13 06:06:26.062883] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:19.644 06:06:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:19.644 06:06:26 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:19.644 06:06:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:19.644 06:06:26 -- common/autotest_common.sh@10 -- # set +x 00:11:19.644 06:06:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:19.644 06:06:26 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:19.644 06:06:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:19.644 06:06:26 -- common/autotest_common.sh@10 -- # set +x 00:11:19.644 06:06:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:19.644 06:06:26 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:20.580 06:06:26 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:20.580 06:06:26 -- common/autotest_common.sh@1177 -- # local i=0 00:11:20.580 06:06:26 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:20.580 06:06:26 -- common/autotest_common.sh@1179 -- 
# [[ -n '' ]] 00:11:20.580 06:06:26 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:22.479 06:06:28 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:22.479 06:06:28 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:22.479 06:06:28 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:22.479 06:06:28 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:22.479 06:06:28 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:22.479 06:06:28 -- common/autotest_common.sh@1187 -- # return 0 00:11:22.479 06:06:28 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:22.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.479 06:06:28 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:22.479 06:06:28 -- common/autotest_common.sh@1198 -- # local i=0 00:11:22.479 06:06:28 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:22.479 06:06:28 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:22.479 06:06:28 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:22.479 06:06:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:22.479 06:06:28 -- common/autotest_common.sh@1210 -- # return 0 00:11:22.479 06:06:28 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:22.479 06:06:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:22.479 06:06:28 -- common/autotest_common.sh@10 -- # set +x 00:11:22.479 06:06:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:22.479 06:06:28 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:22.479 06:06:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:22.479 06:06:28 -- common/autotest_common.sh@10 -- # set +x 00:11:22.479 06:06:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:22.479 06:06:28 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:22.479 06:06:28 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:22.479 06:06:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:22.479 06:06:28 -- common/autotest_common.sh@10 -- # set +x 00:11:22.479 06:06:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:22.479 06:06:28 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:22.479 06:06:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:22.479 06:06:28 -- common/autotest_common.sh@10 -- # set +x 00:11:22.479 [2024-07-13 06:06:28.875723] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:22.479 06:06:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:22.479 06:06:28 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:22.479 06:06:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:22.479 06:06:28 -- common/autotest_common.sh@10 -- # set +x 00:11:22.479 06:06:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:22.479 06:06:28 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:22.479 06:06:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:22.479 06:06:28 -- common/autotest_common.sh@10 -- # set +x 00:11:22.479 06:06:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:22.479 
06:06:28 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:23.046 06:06:29 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:23.046 06:06:29 -- common/autotest_common.sh@1177 -- # local i=0 00:11:23.046 06:06:29 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:23.046 06:06:29 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:23.046 06:06:29 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:25.575 06:06:31 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:25.575 06:06:31 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:25.575 06:06:31 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:25.575 06:06:31 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:25.575 06:06:31 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:25.575 06:06:31 -- common/autotest_common.sh@1187 -- # return 0 00:11:25.575 06:06:31 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:25.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.575 06:06:31 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:25.575 06:06:31 -- common/autotest_common.sh@1198 -- # local i=0 00:11:25.575 06:06:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:25.575 06:06:31 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.575 06:06:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:25.575 06:06:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.575 06:06:31 -- common/autotest_common.sh@1210 -- # return 0 00:11:25.575 06:06:31 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:25.575 06:06:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:25.575 06:06:31 -- common/autotest_common.sh@10 -- # set +x 00:11:25.575 06:06:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:25.575 06:06:31 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:25.575 06:06:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:25.575 06:06:31 -- common/autotest_common.sh@10 -- # set +x 00:11:25.575 06:06:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:25.575 06:06:31 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:25.575 06:06:31 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:25.575 06:06:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:25.575 06:06:31 -- common/autotest_common.sh@10 -- # set +x 00:11:25.575 06:06:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:25.575 06:06:31 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:25.575 06:06:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:25.575 06:06:31 -- common/autotest_common.sh@10 -- # set +x 00:11:25.575 [2024-07-13 06:06:31.646379] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:25.575 06:06:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:25.575 06:06:31 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:25.575 
06:06:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:25.575 06:06:31 -- common/autotest_common.sh@10 -- # set +x 00:11:25.575 06:06:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:25.575 06:06:31 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:25.575 06:06:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:25.575 06:06:31 -- common/autotest_common.sh@10 -- # set +x 00:11:25.575 06:06:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:25.575 06:06:31 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:26.141 06:06:32 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:26.141 06:06:32 -- common/autotest_common.sh@1177 -- # local i=0 00:11:26.141 06:06:32 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:26.141 06:06:32 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:26.141 06:06:32 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:28.037 06:06:34 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:28.037 06:06:34 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:28.037 06:06:34 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:28.037 06:06:34 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:28.037 06:06:34 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:28.037 06:06:34 -- common/autotest_common.sh@1187 -- # return 0 00:11:28.037 06:06:34 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:28.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.037 06:06:34 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:28.037 06:06:34 -- common/autotest_common.sh@1198 -- # local i=0 00:11:28.037 06:06:34 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:28.037 06:06:34 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:28.037 06:06:34 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:28.037 06:06:34 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:28.037 06:06:34 -- common/autotest_common.sh@1210 -- # return 0 00:11:28.037 06:06:34 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:28.037 06:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.037 06:06:34 -- common/autotest_common.sh@10 -- # set +x 00:11:28.037 06:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.037 06:06:34 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:28.037 06:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.037 06:06:34 -- common/autotest_common.sh@10 -- # set +x 00:11:28.037 06:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.037 06:06:34 -- target/rpc.sh@99 -- # seq 1 5 00:11:28.037 06:06:34 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:28.037 06:06:34 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:28.037 06:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.037 06:06:34 -- common/autotest_common.sh@10 -- # set +x 00:11:28.037 06:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.037 06:06:34 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:28.037 06:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.037 06:06:34 -- common/autotest_common.sh@10 -- # set +x 00:11:28.037 [2024-07-13 06:06:34.469444] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:28.037 06:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.037 06:06:34 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:28.037 06:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.037 06:06:34 -- common/autotest_common.sh@10 -- # set +x 00:11:28.037 06:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.037 06:06:34 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:28.037 06:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.037 06:06:34 -- common/autotest_common.sh@10 -- # set +x 00:11:28.037 06:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.037 06:06:34 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:28.037 06:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.037 06:06:34 -- common/autotest_common.sh@10 -- # set +x 00:11:28.037 06:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.037 06:06:34 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:28.037 06:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.037 06:06:34 -- common/autotest_common.sh@10 -- # set +x 00:11:28.037 06:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.037 06:06:34 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:28.037 06:06:34 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:28.037 06:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.037 06:06:34 -- common/autotest_common.sh@10 -- # set +x 00:11:28.037 06:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.037 06:06:34 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:28.037 06:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.037 06:06:34 -- common/autotest_common.sh@10 -- # set +x 00:11:28.037 [2024-07-13 06:06:34.517535] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:28.037 06:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.037 06:06:34 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:28.037 06:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.037 06:06:34 -- common/autotest_common.sh@10 -- # set +x 00:11:28.037 06:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.037 06:06:34 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:28.037 06:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.037 06:06:34 -- common/autotest_common.sh@10 -- # set +x 00:11:28.037 06:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.037 06:06:34 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:28.037 06:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.037 06:06:34 -- 
common/autotest_common.sh@10 -- # set +x 00:11:28.037 06:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.037 06:06:34 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:28.037 06:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.037 06:06:34 -- common/autotest_common.sh@10 -- # set +x 00:11:28.296 06:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.296 06:06:34 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:28.296 06:06:34 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:28.296 06:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.296 06:06:34 -- common/autotest_common.sh@10 -- # set +x 00:11:28.296 06:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.296 06:06:34 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:28.296 06:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.296 06:06:34 -- common/autotest_common.sh@10 -- # set +x 00:11:28.296 [2024-07-13 06:06:34.565707] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:28.296 06:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.296 06:06:34 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:28.296 06:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.296 06:06:34 -- common/autotest_common.sh@10 -- # set +x 00:11:28.296 06:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.296 06:06:34 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:28.296 06:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.296 06:06:34 -- common/autotest_common.sh@10 -- # set +x 00:11:28.296 06:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.296 06:06:34 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:28.296 06:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.296 06:06:34 -- common/autotest_common.sh@10 -- # set +x 00:11:28.296 06:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.296 06:06:34 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:28.296 06:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.296 06:06:34 -- common/autotest_common.sh@10 -- # set +x 00:11:28.296 06:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.296 06:06:34 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:28.296 06:06:34 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:28.296 06:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.296 06:06:34 -- common/autotest_common.sh@10 -- # set +x 00:11:28.296 06:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.296 06:06:34 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:28.296 06:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.296 06:06:34 -- common/autotest_common.sh@10 -- # set +x 00:11:28.296 [2024-07-13 06:06:34.613893] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:28.296 06:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.296 
06:06:34 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:28.296 06:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.296 06:06:34 -- common/autotest_common.sh@10 -- # set +x 00:11:28.296 06:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.296 06:06:34 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:28.296 06:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.296 06:06:34 -- common/autotest_common.sh@10 -- # set +x 00:11:28.296 06:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.296 06:06:34 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:28.296 06:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.296 06:06:34 -- common/autotest_common.sh@10 -- # set +x 00:11:28.296 06:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.296 06:06:34 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:28.296 06:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.296 06:06:34 -- common/autotest_common.sh@10 -- # set +x 00:11:28.296 06:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.296 06:06:34 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:28.296 06:06:34 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:28.296 06:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.296 06:06:34 -- common/autotest_common.sh@10 -- # set +x 00:11:28.296 06:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.296 06:06:34 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:28.296 06:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.296 06:06:34 -- common/autotest_common.sh@10 -- # set +x 00:11:28.296 [2024-07-13 06:06:34.662050] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:28.296 06:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.296 06:06:34 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:28.296 06:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.296 06:06:34 -- common/autotest_common.sh@10 -- # set +x 00:11:28.296 06:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.296 06:06:34 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:28.296 06:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.297 06:06:34 -- common/autotest_common.sh@10 -- # set +x 00:11:28.297 06:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.297 06:06:34 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:28.297 06:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.297 06:06:34 -- common/autotest_common.sh@10 -- # set +x 00:11:28.297 06:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.297 06:06:34 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:28.297 06:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.297 06:06:34 -- common/autotest_common.sh@10 -- # set +x 00:11:28.297 06:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.297 06:06:34 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
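The nvmf_get_stats call traced here returns per-poll-group counters, captured into $stats just below; the test only cares about the totals, which the jsum helper computes by piping the captured JSON through jq and awk. A rough reconstruction of that helper from the filter and awk program visible in the trace (how $stats reaches jq, here a here-string, is an assumption):

    # sum one numeric field across all poll groups of the captured stats JSON
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s += $1} END {print s}'
    }
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 7 in the run below
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 336 in the run below

The assertions only require the sums to be positive, so the exact qpair distribution across the four poll groups is allowed to vary from run to run.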
00:11:28.297 06:06:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:28.297 06:06:34 -- common/autotest_common.sh@10 -- # set +x 00:11:28.297 06:06:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:28.297 06:06:34 -- target/rpc.sh@110 -- # stats='{ 00:11:28.297 "tick_rate": 2700000000, 00:11:28.297 "poll_groups": [ 00:11:28.297 { 00:11:28.297 "name": "nvmf_tgt_poll_group_0", 00:11:28.297 "admin_qpairs": 2, 00:11:28.297 "io_qpairs": 84, 00:11:28.297 "current_admin_qpairs": 0, 00:11:28.297 "current_io_qpairs": 0, 00:11:28.297 "pending_bdev_io": 0, 00:11:28.297 "completed_nvme_io": 101, 00:11:28.297 "transports": [ 00:11:28.297 { 00:11:28.297 "trtype": "TCP" 00:11:28.297 } 00:11:28.297 ] 00:11:28.297 }, 00:11:28.297 { 00:11:28.297 "name": "nvmf_tgt_poll_group_1", 00:11:28.297 "admin_qpairs": 2, 00:11:28.297 "io_qpairs": 84, 00:11:28.297 "current_admin_qpairs": 0, 00:11:28.297 "current_io_qpairs": 0, 00:11:28.297 "pending_bdev_io": 0, 00:11:28.297 "completed_nvme_io": 183, 00:11:28.297 "transports": [ 00:11:28.297 { 00:11:28.297 "trtype": "TCP" 00:11:28.297 } 00:11:28.297 ] 00:11:28.297 }, 00:11:28.297 { 00:11:28.297 "name": "nvmf_tgt_poll_group_2", 00:11:28.297 "admin_qpairs": 1, 00:11:28.297 "io_qpairs": 84, 00:11:28.297 "current_admin_qpairs": 0, 00:11:28.297 "current_io_qpairs": 0, 00:11:28.297 "pending_bdev_io": 0, 00:11:28.297 "completed_nvme_io": 266, 00:11:28.297 "transports": [ 00:11:28.297 { 00:11:28.297 "trtype": "TCP" 00:11:28.297 } 00:11:28.297 ] 00:11:28.297 }, 00:11:28.297 { 00:11:28.297 "name": "nvmf_tgt_poll_group_3", 00:11:28.297 "admin_qpairs": 2, 00:11:28.297 "io_qpairs": 84, 00:11:28.297 "current_admin_qpairs": 0, 00:11:28.297 "current_io_qpairs": 0, 00:11:28.297 "pending_bdev_io": 0, 00:11:28.297 "completed_nvme_io": 136, 00:11:28.297 "transports": [ 00:11:28.297 { 00:11:28.297 "trtype": "TCP" 00:11:28.297 } 00:11:28.297 ] 00:11:28.297 } 00:11:28.297 ] 00:11:28.297 }' 00:11:28.297 06:06:34 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:28.297 06:06:34 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:28.297 06:06:34 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:28.297 06:06:34 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:28.297 06:06:34 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:28.297 06:06:34 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:28.297 06:06:34 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:28.297 06:06:34 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:28.297 06:06:34 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:28.297 06:06:34 -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:11:28.297 06:06:34 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:28.297 06:06:34 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:28.297 06:06:34 -- target/rpc.sh@123 -- # nvmftestfini 00:11:28.297 06:06:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:28.297 06:06:34 -- nvmf/common.sh@116 -- # sync 00:11:28.297 06:06:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:28.297 06:06:34 -- nvmf/common.sh@119 -- # set +e 00:11:28.297 06:06:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:28.297 06:06:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:28.554 rmmod nvme_tcp 00:11:28.554 rmmod nvme_fabrics 00:11:28.554 rmmod nvme_keyring 00:11:28.554 06:06:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:28.554 06:06:34 -- nvmf/common.sh@123 -- # set -e 00:11:28.554 06:06:34 -- 
nvmf/common.sh@124 -- # return 0 00:11:28.554 06:06:34 -- nvmf/common.sh@477 -- # '[' -n 1061962 ']' 00:11:28.554 06:06:34 -- nvmf/common.sh@478 -- # killprocess 1061962 00:11:28.554 06:06:34 -- common/autotest_common.sh@926 -- # '[' -z 1061962 ']' 00:11:28.555 06:06:34 -- common/autotest_common.sh@930 -- # kill -0 1061962 00:11:28.555 06:06:34 -- common/autotest_common.sh@931 -- # uname 00:11:28.555 06:06:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:28.555 06:06:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1061962 00:11:28.555 06:06:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:28.555 06:06:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:28.555 06:06:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1061962' 00:11:28.555 killing process with pid 1061962 00:11:28.555 06:06:34 -- common/autotest_common.sh@945 -- # kill 1061962 00:11:28.555 06:06:34 -- common/autotest_common.sh@950 -- # wait 1061962 00:11:28.813 06:06:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:28.813 06:06:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:28.813 06:06:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:28.813 06:06:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:28.813 06:06:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:28.813 06:06:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.813 06:06:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:28.813 06:06:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.349 06:06:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:11:31.349 00:11:31.349 real 0m26.139s 00:11:31.349 user 1m25.662s 00:11:31.349 sys 0m4.136s 00:11:31.349 06:06:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:31.349 06:06:37 -- common/autotest_common.sh@10 -- # set +x 00:11:31.349 ************************************ 00:11:31.349 END TEST nvmf_rpc 00:11:31.349 ************************************ 00:11:31.349 06:06:37 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:31.349 06:06:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:31.349 06:06:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:31.349 06:06:37 -- common/autotest_common.sh@10 -- # set +x 00:11:31.349 ************************************ 00:11:31.349 START TEST nvmf_invalid 00:11:31.349 ************************************ 00:11:31.349 06:06:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:31.349 * Looking for test storage... 
00:11:31.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:31.349 06:06:37 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:31.349 06:06:37 -- nvmf/common.sh@7 -- # uname -s 00:11:31.349 06:06:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:31.349 06:06:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:31.349 06:06:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:31.349 06:06:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:31.349 06:06:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:31.349 06:06:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:31.349 06:06:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:31.349 06:06:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:31.349 06:06:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:31.349 06:06:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:31.349 06:06:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:31.349 06:06:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:31.349 06:06:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:31.349 06:06:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:31.349 06:06:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:31.349 06:06:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:31.349 06:06:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.349 06:06:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.349 06:06:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.349 06:06:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.349 06:06:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.349 06:06:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.349 06:06:37 -- paths/export.sh@5 -- # export PATH 00:11:31.349 06:06:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.349 06:06:37 -- nvmf/common.sh@46 -- # : 0 00:11:31.349 06:06:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:31.349 06:06:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:31.349 06:06:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:31.349 06:06:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:31.350 06:06:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.350 06:06:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:31.350 06:06:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:31.350 06:06:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:31.350 06:06:37 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:31.350 06:06:37 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:31.350 06:06:37 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:31.350 06:06:37 -- target/invalid.sh@14 -- # target=foobar 00:11:31.350 06:06:37 -- target/invalid.sh@16 -- # RANDOM=0 00:11:31.350 06:06:37 -- target/invalid.sh@34 -- # nvmftestinit 00:11:31.350 06:06:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:31.350 06:06:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:31.350 06:06:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:31.350 06:06:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:31.350 06:06:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:31.350 06:06:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.350 06:06:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:31.350 06:06:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.350 06:06:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:11:31.350 06:06:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:11:31.350 06:06:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:11:31.350 06:06:37 -- common/autotest_common.sh@10 -- # set +x 00:11:33.307 06:06:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:33.307 06:06:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:11:33.307 06:06:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:11:33.307 06:06:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:11:33.308 06:06:39 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:11:33.308 06:06:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:11:33.308 06:06:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:11:33.308 06:06:39 -- nvmf/common.sh@294 -- # net_devs=() 00:11:33.308 06:06:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:11:33.308 06:06:39 -- nvmf/common.sh@295 -- # e810=() 00:11:33.308 06:06:39 -- nvmf/common.sh@295 -- # local -ga e810 00:11:33.308 06:06:39 -- nvmf/common.sh@296 -- # x722=() 00:11:33.308 06:06:39 -- nvmf/common.sh@296 -- # local -ga x722 00:11:33.308 06:06:39 -- nvmf/common.sh@297 -- # mlx=() 00:11:33.308 06:06:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:11:33.308 06:06:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:33.308 06:06:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:33.308 06:06:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:33.308 06:06:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:33.308 06:06:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:33.308 06:06:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:33.308 06:06:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:33.308 06:06:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:33.308 06:06:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:33.308 06:06:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:33.308 06:06:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:33.308 06:06:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:11:33.308 06:06:39 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:11:33.308 06:06:39 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:11:33.308 06:06:39 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:11:33.308 06:06:39 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:11:33.308 06:06:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:11:33.308 06:06:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:33.308 06:06:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:33.308 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:33.308 06:06:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:33.308 06:06:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:33.308 06:06:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.308 06:06:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.308 06:06:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:33.308 06:06:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:33.308 06:06:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:33.308 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:33.308 06:06:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:33.308 06:06:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:33.308 06:06:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.308 06:06:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.308 06:06:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:33.308 06:06:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:11:33.308 06:06:39 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:11:33.308 06:06:39 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:11:33.308 06:06:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:33.308 
06:06:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.308 06:06:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:33.308 06:06:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.308 06:06:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:33.308 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:33.308 06:06:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.308 06:06:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:33.308 06:06:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.308 06:06:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:33.308 06:06:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.308 06:06:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:33.308 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:33.308 06:06:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.308 06:06:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:11:33.308 06:06:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:11:33.308 06:06:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:11:33.308 06:06:39 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:11:33.308 06:06:39 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:11:33.308 06:06:39 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:33.308 06:06:39 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:33.308 06:06:39 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:33.308 06:06:39 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:11:33.308 06:06:39 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:33.308 06:06:39 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:33.308 06:06:39 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:11:33.308 06:06:39 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:33.308 06:06:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:33.308 06:06:39 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:11:33.308 06:06:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:11:33.308 06:06:39 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:11:33.308 06:06:39 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:33.308 06:06:39 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:33.308 06:06:39 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:33.308 06:06:39 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:11:33.308 06:06:39 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:33.308 06:06:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:33.308 06:06:39 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:33.308 06:06:39 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:11:33.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:33.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:11:33.308 00:11:33.308 --- 10.0.0.2 ping statistics --- 00:11:33.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.308 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:11:33.308 06:06:39 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:33.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:33.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:11:33.308 00:11:33.308 --- 10.0.0.1 ping statistics --- 00:11:33.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.308 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:11:33.308 06:06:39 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:33.308 06:06:39 -- nvmf/common.sh@410 -- # return 0 00:11:33.308 06:06:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:33.308 06:06:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:33.308 06:06:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:33.308 06:06:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:33.308 06:06:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:33.308 06:06:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:33.308 06:06:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:33.308 06:06:39 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:33.308 06:06:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:33.308 06:06:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:33.308 06:06:39 -- common/autotest_common.sh@10 -- # set +x 00:11:33.308 06:06:39 -- nvmf/common.sh@469 -- # nvmfpid=1066683 00:11:33.308 06:06:39 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:33.308 06:06:39 -- nvmf/common.sh@470 -- # waitforlisten 1066683 00:11:33.308 06:06:39 -- common/autotest_common.sh@819 -- # '[' -z 1066683 ']' 00:11:33.308 06:06:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.308 06:06:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:33.308 06:06:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.308 06:06:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:33.308 06:06:39 -- common/autotest_common.sh@10 -- # set +x 00:11:33.309 [2024-07-13 06:06:39.573880] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:33.309 [2024-07-13 06:06:39.573948] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.309 EAL: No free 2048 kB hugepages reported on node 1 00:11:33.309 [2024-07-13 06:06:39.646193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:33.309 [2024-07-13 06:06:39.764777] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:33.309 [2024-07-13 06:06:39.764943] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:33.309 [2024-07-13 06:06:39.764964] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:33.309 [2024-07-13 06:06:39.764979] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
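The namespace plumbing traced just before the target comes up is what lets one host play both initiator and target over TCP: one port of the E810 pair (cvl_0_0) is moved into a private network namespace and carries 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1. Condensed from the commands in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                         # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator

Once both pings succeed, every target-side command (including the nvmf_tgt invocation above) is prefixed with ip netns exec cvl_0_0_ns_spdk so that it listens on 10.0.0.2 inside the namespace.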
00:11:33.309 [2024-07-13 06:06:39.765053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.309 [2024-07-13 06:06:39.765078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:33.309 [2024-07-13 06:06:39.765131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:33.309 [2024-07-13 06:06:39.765134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.240 06:06:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:34.240 06:06:40 -- common/autotest_common.sh@852 -- # return 0 00:11:34.240 06:06:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:34.240 06:06:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:34.240 06:06:40 -- common/autotest_common.sh@10 -- # set +x 00:11:34.240 06:06:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:34.240 06:06:40 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:34.240 06:06:40 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode18509 00:11:34.497 [2024-07-13 06:06:40.770020] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:34.497 06:06:40 -- target/invalid.sh@40 -- # out='request: 00:11:34.497 { 00:11:34.497 "nqn": "nqn.2016-06.io.spdk:cnode18509", 00:11:34.497 "tgt_name": "foobar", 00:11:34.497 "method": "nvmf_create_subsystem", 00:11:34.497 "req_id": 1 00:11:34.497 } 00:11:34.497 Got JSON-RPC error response 00:11:34.497 response: 00:11:34.497 { 00:11:34.497 "code": -32603, 00:11:34.497 "message": "Unable to find target foobar" 00:11:34.497 }' 00:11:34.497 06:06:40 -- target/invalid.sh@41 -- # [[ request: 00:11:34.497 { 00:11:34.497 "nqn": "nqn.2016-06.io.spdk:cnode18509", 00:11:34.497 "tgt_name": "foobar", 00:11:34.497 "method": "nvmf_create_subsystem", 00:11:34.497 "req_id": 1 00:11:34.497 } 00:11:34.497 Got JSON-RPC error response 00:11:34.497 response: 00:11:34.497 { 00:11:34.497 "code": -32603, 00:11:34.497 "message": "Unable to find target foobar" 00:11:34.497 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:34.497 06:06:40 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:34.497 06:06:40 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode30284 00:11:34.497 [2024-07-13 06:06:41.006856] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30284: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:34.755 06:06:41 -- target/invalid.sh@45 -- # out='request: 00:11:34.755 { 00:11:34.755 "nqn": "nqn.2016-06.io.spdk:cnode30284", 00:11:34.755 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:34.755 "method": "nvmf_create_subsystem", 00:11:34.755 "req_id": 1 00:11:34.755 } 00:11:34.755 Got JSON-RPC error response 00:11:34.755 response: 00:11:34.755 { 00:11:34.755 "code": -32602, 00:11:34.755 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:34.755 }' 00:11:34.755 06:06:41 -- target/invalid.sh@46 -- # [[ request: 00:11:34.755 { 00:11:34.755 "nqn": "nqn.2016-06.io.spdk:cnode30284", 00:11:34.755 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:34.755 "method": "nvmf_create_subsystem", 00:11:34.755 "req_id": 1 00:11:34.755 } 00:11:34.755 Got JSON-RPC error response 00:11:34.755 response: 00:11:34.755 { 
00:11:34.755 "code": -32602, 00:11:34.755 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:34.755 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:34.755 06:06:41 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:34.755 06:06:41 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode6090 00:11:34.755 [2024-07-13 06:06:41.263642] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6090: invalid model number 'SPDK_Controller' 00:11:35.012 06:06:41 -- target/invalid.sh@50 -- # out='request: 00:11:35.012 { 00:11:35.012 "nqn": "nqn.2016-06.io.spdk:cnode6090", 00:11:35.012 "model_number": "SPDK_Controller\u001f", 00:11:35.012 "method": "nvmf_create_subsystem", 00:11:35.012 "req_id": 1 00:11:35.012 } 00:11:35.012 Got JSON-RPC error response 00:11:35.012 response: 00:11:35.012 { 00:11:35.012 "code": -32602, 00:11:35.012 "message": "Invalid MN SPDK_Controller\u001f" 00:11:35.012 }' 00:11:35.012 06:06:41 -- target/invalid.sh@51 -- # [[ request: 00:11:35.012 { 00:11:35.012 "nqn": "nqn.2016-06.io.spdk:cnode6090", 00:11:35.012 "model_number": "SPDK_Controller\u001f", 00:11:35.012 "method": "nvmf_create_subsystem", 00:11:35.012 "req_id": 1 00:11:35.012 } 00:11:35.012 Got JSON-RPC error response 00:11:35.012 response: 00:11:35.012 { 00:11:35.012 "code": -32602, 00:11:35.012 "message": "Invalid MN SPDK_Controller\u001f" 00:11:35.012 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:35.012 06:06:41 -- target/invalid.sh@54 -- # gen_random_s 21 00:11:35.012 06:06:41 -- target/invalid.sh@19 -- # local length=21 ll 00:11:35.012 06:06:41 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:35.012 06:06:41 -- target/invalid.sh@21 -- # local chars 00:11:35.012 06:06:41 -- target/invalid.sh@22 -- # local string 00:11:35.012 06:06:41 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:35.012 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.012 06:06:41 -- target/invalid.sh@25 -- # printf %x 80 00:11:35.012 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x50' 00:11:35.012 06:06:41 -- target/invalid.sh@25 -- # string+=P 00:11:35.012 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.012 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.012 06:06:41 -- target/invalid.sh@25 -- # printf %x 47 00:11:35.012 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:35.012 06:06:41 -- target/invalid.sh@25 -- # string+=/ 00:11:35.012 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.012 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.012 06:06:41 -- target/invalid.sh@25 -- # printf %x 64 00:11:35.012 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x40' 00:11:35.012 06:06:41 -- target/invalid.sh@25 -- # string+=@ 00:11:35.012 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.012 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.012 06:06:41 -- target/invalid.sh@25 -- # printf %x 75 00:11:35.012 06:06:41 -- target/invalid.sh@25 
-- # echo -e '\x4b' 00:11:35.012 06:06:41 -- target/invalid.sh@25 -- # string+=K 00:11:35.012 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.012 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.012 06:06:41 -- target/invalid.sh@25 -- # printf %x 66 00:11:35.012 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x42' 00:11:35.012 06:06:41 -- target/invalid.sh@25 -- # string+=B 00:11:35.012 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.012 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.012 06:06:41 -- target/invalid.sh@25 -- # printf %x 113 00:11:35.012 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x71' 00:11:35.012 06:06:41 -- target/invalid.sh@25 -- # string+=q 00:11:35.012 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.012 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.012 06:06:41 -- target/invalid.sh@25 -- # printf %x 99 00:11:35.012 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x63' 00:11:35.012 06:06:41 -- target/invalid.sh@25 -- # string+=c 00:11:35.012 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.012 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.012 06:06:41 -- target/invalid.sh@25 -- # printf %x 119 00:11:35.012 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x77' 00:11:35.012 06:06:41 -- target/invalid.sh@25 -- # string+=w 00:11:35.012 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.013 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # printf %x 79 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # string+=O 00:11:35.013 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.013 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # printf %x 88 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x58' 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # string+=X 00:11:35.013 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.013 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # printf %x 117 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x75' 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # string+=u 00:11:35.013 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.013 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # printf %x 61 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # string+== 00:11:35.013 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.013 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # printf %x 39 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x27' 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # string+=\' 00:11:35.013 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.013 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # printf %x 39 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x27' 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # string+=\' 00:11:35.013 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.013 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # printf %x 57 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # 
echo -e '\x39' 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # string+=9 00:11:35.013 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.013 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # printf %x 122 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # string+=z 00:11:35.013 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.013 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # printf %x 119 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x77' 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # string+=w 00:11:35.013 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.013 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # printf %x 40 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x28' 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # string+='(' 00:11:35.013 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.013 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # printf %x 36 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x24' 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # string+='$' 00:11:35.013 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.013 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # printf %x 64 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x40' 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # string+=@ 00:11:35.013 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.013 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # printf %x 68 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x44' 00:11:35.013 06:06:41 -- target/invalid.sh@25 -- # string+=D 00:11:35.013 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.013 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.013 06:06:41 -- target/invalid.sh@28 -- # [[ P == \- ]] 00:11:35.013 06:06:41 -- target/invalid.sh@31 -- # echo 'P/@KBqcwOXu='\'''\''9zw($@D' 00:11:35.013 06:06:41 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'P/@KBqcwOXu='\'''\''9zw($@D' nqn.2016-06.io.spdk:cnode17732 00:11:35.271 [2024-07-13 06:06:41.580687] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17732: invalid serial number 'P/@KBqcwOXu=''9zw($@D' 00:11:35.271 06:06:41 -- target/invalid.sh@54 -- # out='request: 00:11:35.271 { 00:11:35.271 "nqn": "nqn.2016-06.io.spdk:cnode17732", 00:11:35.271 "serial_number": "P/@KBqcwOXu='\'''\''9zw($@D", 00:11:35.271 "method": "nvmf_create_subsystem", 00:11:35.271 "req_id": 1 00:11:35.271 } 00:11:35.271 Got JSON-RPC error response 00:11:35.271 response: 00:11:35.271 { 00:11:35.271 "code": -32602, 00:11:35.271 "message": "Invalid SN P/@KBqcwOXu='\'''\''9zw($@D" 00:11:35.271 }' 00:11:35.271 06:06:41 -- target/invalid.sh@55 -- # [[ request: 00:11:35.271 { 00:11:35.271 "nqn": "nqn.2016-06.io.spdk:cnode17732", 00:11:35.271 "serial_number": "P/@KBqcwOXu=''9zw($@D", 00:11:35.271 "method": "nvmf_create_subsystem", 00:11:35.271 "req_id": 1 00:11:35.271 } 00:11:35.271 Got JSON-RPC error response 00:11:35.271 response: 00:11:35.271 { 00:11:35.271 "code": -32602, 00:11:35.271 
"message": "Invalid SN P/@KBqcwOXu=''9zw($@D" 00:11:35.271 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:35.271 06:06:41 -- target/invalid.sh@58 -- # gen_random_s 41 00:11:35.271 06:06:41 -- target/invalid.sh@19 -- # local length=41 ll 00:11:35.271 06:06:41 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:35.271 06:06:41 -- target/invalid.sh@21 -- # local chars 00:11:35.271 06:06:41 -- target/invalid.sh@22 -- # local string 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # printf %x 39 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x27' 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # string+=\' 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # printf %x 56 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # string+=8 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # printf %x 52 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x34' 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # string+=4 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # printf %x 120 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x78' 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # string+=x 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # printf %x 45 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # string+=- 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # printf %x 46 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # string+=. 
00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # printf %x 77 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # string+=M 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # printf %x 103 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x67' 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # string+=g 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # printf %x 73 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x49' 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # string+=I 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # printf %x 46 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # string+=. 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # printf %x 116 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # string+=t 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # printf %x 114 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x72' 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # string+=r 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # printf %x 81 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # string+=Q 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # printf %x 84 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # string+=T 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # printf %x 41 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # string+=')' 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # printf %x 52 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x34' 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # string+=4 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # printf %x 47 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # string+=/ 
00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # printf %x 47 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # string+=/ 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # printf %x 84 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:35.271 06:06:41 -- target/invalid.sh@25 -- # string+=T 00:11:35.271 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # printf %x 72 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x48' 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # string+=H 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # printf %x 75 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # string+=K 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # printf %x 77 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # string+=M 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # printf %x 74 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # string+=J 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # printf %x 79 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # string+=O 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # printf %x 115 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # string+=s 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # printf %x 123 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # string+='{' 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # printf %x 82 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x52' 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # string+=R 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # printf %x 47 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # string+=/ 
00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # printf %x 70 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x46' 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # string+=F 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # printf %x 90 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # string+=Z 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # printf %x 82 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x52' 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # string+=R 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # printf %x 35 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x23' 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # string+='#' 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # printf %x 54 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x36' 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # string+=6 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # printf %x 106 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # string+=j 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # printf %x 63 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # string+='?' 
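
The character-by-character trace here (printf %x, echo -e, string+=) is produced by the test's gen_random_s helper while it assembles the random 41-character model number. A minimal sketch of that helper, reconstructed from the xtrace rather than copied from invalid.sh itself (the seq-based chars initialisation and the leading-dash guard are assumptions), looks roughly like this:

    gen_random_s() {
        local length=$1 ll
        local chars=($(seq 32 127))        # decimal codes 32..127, matching the chars=(...) array in the trace
        local string=''
        for ((ll = 0; ll < length; ll++)); do
            # pick a random code, render it as hex, and append the character it encodes
            # (a simplification; the real helper may handle whitespace characters differently)
            string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
        done
        [[ ${string:0:1} == - ]] && string=" $string"   # assumed guard so the result is never parsed as an option
        echo "$string"
    }

Each generated string is then handed to scripts/rpc.py nvmf_create_subsystem as -s (serial number) or -d (model number), and the test only asserts that the JSON-RPC error text contains "Invalid SN" or "Invalid MN", exactly as the [[ ... == *Invalid\ SN* ]] / *Invalid\ MN* comparisons in the surrounding trace show.
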
00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # printf %x 57 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # string+=9 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # printf %x 108 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # string+=l 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # printf %x 89 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x59' 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # string+=Y 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # printf %x 41 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # string+=')' 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # printf %x 90 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # string+=Z 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # printf %x 89 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # echo -e '\x59' 00:11:35.272 06:06:41 -- target/invalid.sh@25 -- # string+=Y 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:35.272 06:06:41 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:35.272 06:06:41 -- target/invalid.sh@28 -- # [[ ' == \- ]] 00:11:35.272 06:06:41 -- target/invalid.sh@31 -- # echo ''\''84x-.MgI.trQT)4//THKMJOs{R/FZR#6j?9lY)ZY' 00:11:35.272 06:06:41 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ''\''84x-.MgI.trQT)4//THKMJOs{R/FZR#6j?9lY)ZY' nqn.2016-06.io.spdk:cnode27769 00:11:35.529 [2024-07-13 06:06:41.966020] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27769: invalid model number ''84x-.MgI.trQT)4//THKMJOs{R/FZR#6j?9lY)ZY' 00:11:35.529 06:06:41 -- target/invalid.sh@58 -- # out='request: 00:11:35.529 { 00:11:35.529 "nqn": "nqn.2016-06.io.spdk:cnode27769", 00:11:35.529 "model_number": "'\''84x-.MgI.trQT)4//THKMJOs{R/FZR#6j?9lY)ZY", 00:11:35.529 "method": "nvmf_create_subsystem", 00:11:35.529 "req_id": 1 00:11:35.529 } 00:11:35.529 Got JSON-RPC error response 00:11:35.529 response: 00:11:35.529 { 00:11:35.529 "code": -32602, 00:11:35.529 "message": "Invalid MN '\''84x-.MgI.trQT)4//THKMJOs{R/FZR#6j?9lY)ZY" 00:11:35.529 }' 00:11:35.529 06:06:41 -- target/invalid.sh@59 -- # [[ request: 00:11:35.529 { 00:11:35.529 "nqn": "nqn.2016-06.io.spdk:cnode27769", 00:11:35.529 "model_number": "'84x-.MgI.trQT)4//THKMJOs{R/FZR#6j?9lY)ZY", 00:11:35.529 "method": "nvmf_create_subsystem", 00:11:35.529 "req_id": 1 00:11:35.529 } 00:11:35.529 Got JSON-RPC error response 00:11:35.529 response: 00:11:35.529 { 00:11:35.529 "code": 
-32602, 00:11:35.529 "message": "Invalid MN '84x-.MgI.trQT)4//THKMJOs{R/FZR#6j?9lY)ZY" 00:11:35.529 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:35.529 06:06:41 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:11:35.787 [2024-07-13 06:06:42.194904] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:35.787 06:06:42 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:36.044 06:06:42 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:11:36.044 06:06:42 -- target/invalid.sh@67 -- # echo '' 00:11:36.044 06:06:42 -- target/invalid.sh@67 -- # head -n 1 00:11:36.044 06:06:42 -- target/invalid.sh@67 -- # IP= 00:11:36.044 06:06:42 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:11:36.301 [2024-07-13 06:06:42.680490] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:36.301 06:06:42 -- target/invalid.sh@69 -- # out='request: 00:11:36.301 { 00:11:36.301 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:36.301 "listen_address": { 00:11:36.301 "trtype": "tcp", 00:11:36.301 "traddr": "", 00:11:36.301 "trsvcid": "4421" 00:11:36.301 }, 00:11:36.301 "method": "nvmf_subsystem_remove_listener", 00:11:36.301 "req_id": 1 00:11:36.301 } 00:11:36.301 Got JSON-RPC error response 00:11:36.301 response: 00:11:36.301 { 00:11:36.301 "code": -32602, 00:11:36.301 "message": "Invalid parameters" 00:11:36.302 }' 00:11:36.302 06:06:42 -- target/invalid.sh@70 -- # [[ request: 00:11:36.302 { 00:11:36.302 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:36.302 "listen_address": { 00:11:36.302 "trtype": "tcp", 00:11:36.302 "traddr": "", 00:11:36.302 "trsvcid": "4421" 00:11:36.302 }, 00:11:36.302 "method": "nvmf_subsystem_remove_listener", 00:11:36.302 "req_id": 1 00:11:36.302 } 00:11:36.302 Got JSON-RPC error response 00:11:36.302 response: 00:11:36.302 { 00:11:36.302 "code": -32602, 00:11:36.302 "message": "Invalid parameters" 00:11:36.302 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:36.302 06:06:42 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17007 -i 0 00:11:36.559 [2024-07-13 06:06:42.921257] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17007: invalid cntlid range [0-65519] 00:11:36.559 06:06:42 -- target/invalid.sh@73 -- # out='request: 00:11:36.559 { 00:11:36.559 "nqn": "nqn.2016-06.io.spdk:cnode17007", 00:11:36.559 "min_cntlid": 0, 00:11:36.559 "method": "nvmf_create_subsystem", 00:11:36.559 "req_id": 1 00:11:36.559 } 00:11:36.559 Got JSON-RPC error response 00:11:36.559 response: 00:11:36.559 { 00:11:36.559 "code": -32602, 00:11:36.559 "message": "Invalid cntlid range [0-65519]" 00:11:36.559 }' 00:11:36.559 06:06:42 -- target/invalid.sh@74 -- # [[ request: 00:11:36.559 { 00:11:36.559 "nqn": "nqn.2016-06.io.spdk:cnode17007", 00:11:36.559 "min_cntlid": 0, 00:11:36.559 "method": "nvmf_create_subsystem", 00:11:36.559 "req_id": 1 00:11:36.559 } 00:11:36.559 Got JSON-RPC error response 00:11:36.559 response: 00:11:36.559 { 00:11:36.559 "code": -32602, 00:11:36.559 "message": "Invalid cntlid range [0-65519]" 00:11:36.559 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:36.559 06:06:42 -- target/invalid.sh@75 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18353 -i 65520 00:11:36.817 [2024-07-13 06:06:43.158057] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18353: invalid cntlid range [65520-65519] 00:11:36.817 06:06:43 -- target/invalid.sh@75 -- # out='request: 00:11:36.817 { 00:11:36.817 "nqn": "nqn.2016-06.io.spdk:cnode18353", 00:11:36.817 "min_cntlid": 65520, 00:11:36.817 "method": "nvmf_create_subsystem", 00:11:36.817 "req_id": 1 00:11:36.817 } 00:11:36.817 Got JSON-RPC error response 00:11:36.817 response: 00:11:36.817 { 00:11:36.817 "code": -32602, 00:11:36.817 "message": "Invalid cntlid range [65520-65519]" 00:11:36.817 }' 00:11:36.817 06:06:43 -- target/invalid.sh@76 -- # [[ request: 00:11:36.817 { 00:11:36.817 "nqn": "nqn.2016-06.io.spdk:cnode18353", 00:11:36.817 "min_cntlid": 65520, 00:11:36.817 "method": "nvmf_create_subsystem", 00:11:36.817 "req_id": 1 00:11:36.817 } 00:11:36.817 Got JSON-RPC error response 00:11:36.817 response: 00:11:36.817 { 00:11:36.817 "code": -32602, 00:11:36.817 "message": "Invalid cntlid range [65520-65519]" 00:11:36.817 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:36.817 06:06:43 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1903 -I 0 00:11:37.075 [2024-07-13 06:06:43.410944] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1903: invalid cntlid range [1-0] 00:11:37.075 06:06:43 -- target/invalid.sh@77 -- # out='request: 00:11:37.075 { 00:11:37.075 "nqn": "nqn.2016-06.io.spdk:cnode1903", 00:11:37.075 "max_cntlid": 0, 00:11:37.075 "method": "nvmf_create_subsystem", 00:11:37.075 "req_id": 1 00:11:37.075 } 00:11:37.075 Got JSON-RPC error response 00:11:37.075 response: 00:11:37.075 { 00:11:37.075 "code": -32602, 00:11:37.075 "message": "Invalid cntlid range [1-0]" 00:11:37.075 }' 00:11:37.075 06:06:43 -- target/invalid.sh@78 -- # [[ request: 00:11:37.075 { 00:11:37.075 "nqn": "nqn.2016-06.io.spdk:cnode1903", 00:11:37.075 "max_cntlid": 0, 00:11:37.075 "method": "nvmf_create_subsystem", 00:11:37.075 "req_id": 1 00:11:37.075 } 00:11:37.075 Got JSON-RPC error response 00:11:37.075 response: 00:11:37.075 { 00:11:37.075 "code": -32602, 00:11:37.075 "message": "Invalid cntlid range [1-0]" 00:11:37.075 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:37.075 06:06:43 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16416 -I 65520 00:11:37.331 [2024-07-13 06:06:43.643706] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16416: invalid cntlid range [1-65520] 00:11:37.331 06:06:43 -- target/invalid.sh@79 -- # out='request: 00:11:37.331 { 00:11:37.331 "nqn": "nqn.2016-06.io.spdk:cnode16416", 00:11:37.331 "max_cntlid": 65520, 00:11:37.331 "method": "nvmf_create_subsystem", 00:11:37.331 "req_id": 1 00:11:37.331 } 00:11:37.331 Got JSON-RPC error response 00:11:37.331 response: 00:11:37.331 { 00:11:37.331 "code": -32602, 00:11:37.331 "message": "Invalid cntlid range [1-65520]" 00:11:37.331 }' 00:11:37.331 06:06:43 -- target/invalid.sh@80 -- # [[ request: 00:11:37.331 { 00:11:37.331 "nqn": "nqn.2016-06.io.spdk:cnode16416", 00:11:37.331 "max_cntlid": 65520, 00:11:37.331 "method": "nvmf_create_subsystem", 00:11:37.331 "req_id": 1 00:11:37.331 } 00:11:37.331 Got JSON-RPC error response 00:11:37.331 
response: 00:11:37.331 { 00:11:37.331 "code": -32602, 00:11:37.331 "message": "Invalid cntlid range [1-65520]" 00:11:37.331 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:37.331 06:06:43 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31196 -i 6 -I 5 00:11:37.588 [2024-07-13 06:06:43.884541] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31196: invalid cntlid range [6-5] 00:11:37.588 06:06:43 -- target/invalid.sh@83 -- # out='request: 00:11:37.588 { 00:11:37.588 "nqn": "nqn.2016-06.io.spdk:cnode31196", 00:11:37.588 "min_cntlid": 6, 00:11:37.588 "max_cntlid": 5, 00:11:37.588 "method": "nvmf_create_subsystem", 00:11:37.588 "req_id": 1 00:11:37.588 } 00:11:37.588 Got JSON-RPC error response 00:11:37.588 response: 00:11:37.588 { 00:11:37.588 "code": -32602, 00:11:37.588 "message": "Invalid cntlid range [6-5]" 00:11:37.588 }' 00:11:37.588 06:06:43 -- target/invalid.sh@84 -- # [[ request: 00:11:37.588 { 00:11:37.588 "nqn": "nqn.2016-06.io.spdk:cnode31196", 00:11:37.588 "min_cntlid": 6, 00:11:37.588 "max_cntlid": 5, 00:11:37.588 "method": "nvmf_create_subsystem", 00:11:37.588 "req_id": 1 00:11:37.588 } 00:11:37.588 Got JSON-RPC error response 00:11:37.588 response: 00:11:37.588 { 00:11:37.588 "code": -32602, 00:11:37.588 "message": "Invalid cntlid range [6-5]" 00:11:37.588 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:37.588 06:06:43 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:37.588 06:06:44 -- target/invalid.sh@87 -- # out='request: 00:11:37.588 { 00:11:37.588 "name": "foobar", 00:11:37.588 "method": "nvmf_delete_target", 00:11:37.588 "req_id": 1 00:11:37.588 } 00:11:37.588 Got JSON-RPC error response 00:11:37.588 response: 00:11:37.588 { 00:11:37.588 "code": -32602, 00:11:37.588 "message": "The specified target doesn'\''t exist, cannot delete it." 00:11:37.588 }' 00:11:37.589 06:06:44 -- target/invalid.sh@88 -- # [[ request: 00:11:37.589 { 00:11:37.589 "name": "foobar", 00:11:37.589 "method": "nvmf_delete_target", 00:11:37.589 "req_id": 1 00:11:37.589 } 00:11:37.589 Got JSON-RPC error response 00:11:37.589 response: 00:11:37.589 { 00:11:37.589 "code": -32602, 00:11:37.589 "message": "The specified target doesn't exist, cannot delete it." 
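
The cntlid-range cases traced above (-i 0, -i 65520, -I 0, -I 65520, and -i 6 -I 5) all follow the same capture-and-match pattern; condensed into one loop it amounts to the sketch below (the loop and the $rpc_py shorthand are editorial, while the argument sets and the expected error text come straight from the trace):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for args in '-i 0' '-i 65520' '-I 0' '-I 65520' '-i 6 -I 5'; do
        # each call is expected to be rejected; only the error text is asserted
        # (word-splitting of the unquoted $args is intentional)
        out=$($rpc_py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$RANDOM" $args 2>&1) || true
        [[ $out == *'Invalid cntlid range'* ]]
    done

The nvmf_delete_target case that follows uses the same pattern, matching on "The specified target doesn't exist, cannot delete it." instead.
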
00:11:37.589 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:37.589 06:06:44 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:37.589 06:06:44 -- target/invalid.sh@91 -- # nvmftestfini 00:11:37.589 06:06:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:37.589 06:06:44 -- nvmf/common.sh@116 -- # sync 00:11:37.589 06:06:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:37.589 06:06:44 -- nvmf/common.sh@119 -- # set +e 00:11:37.589 06:06:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:37.589 06:06:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:37.589 rmmod nvme_tcp 00:11:37.589 rmmod nvme_fabrics 00:11:37.589 rmmod nvme_keyring 00:11:37.589 06:06:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:37.589 06:06:44 -- nvmf/common.sh@123 -- # set -e 00:11:37.589 06:06:44 -- nvmf/common.sh@124 -- # return 0 00:11:37.589 06:06:44 -- nvmf/common.sh@477 -- # '[' -n 1066683 ']' 00:11:37.589 06:06:44 -- nvmf/common.sh@478 -- # killprocess 1066683 00:11:37.589 06:06:44 -- common/autotest_common.sh@926 -- # '[' -z 1066683 ']' 00:11:37.589 06:06:44 -- common/autotest_common.sh@930 -- # kill -0 1066683 00:11:37.589 06:06:44 -- common/autotest_common.sh@931 -- # uname 00:11:37.589 06:06:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:37.589 06:06:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1066683 00:11:37.846 06:06:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:37.846 06:06:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:37.846 06:06:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1066683' 00:11:37.846 killing process with pid 1066683 00:11:37.846 06:06:44 -- common/autotest_common.sh@945 -- # kill 1066683 00:11:37.846 06:06:44 -- common/autotest_common.sh@950 -- # wait 1066683 00:11:38.104 06:06:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:38.104 06:06:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:38.104 06:06:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:38.104 06:06:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:38.104 06:06:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:38.104 06:06:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.104 06:06:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:38.104 06:06:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.019 06:06:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:11:40.019 00:11:40.019 real 0m9.134s 00:11:40.019 user 0m21.986s 00:11:40.019 sys 0m2.435s 00:11:40.019 06:06:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:40.019 06:06:46 -- common/autotest_common.sh@10 -- # set +x 00:11:40.019 ************************************ 00:11:40.019 END TEST nvmf_invalid 00:11:40.019 ************************************ 00:11:40.019 06:06:46 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:40.019 06:06:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:40.019 06:06:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:40.019 06:06:46 -- common/autotest_common.sh@10 -- # set +x 00:11:40.019 ************************************ 00:11:40.019 START TEST nvmf_abort 00:11:40.019 ************************************ 00:11:40.019 06:06:46 -- 
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:40.019 * Looking for test storage... 00:11:40.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.019 06:06:46 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:40.019 06:06:46 -- nvmf/common.sh@7 -- # uname -s 00:11:40.019 06:06:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:40.019 06:06:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:40.019 06:06:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:40.019 06:06:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:40.019 06:06:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:40.019 06:06:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:40.019 06:06:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:40.019 06:06:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:40.019 06:06:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:40.019 06:06:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:40.020 06:06:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:40.020 06:06:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:40.020 06:06:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:40.020 06:06:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:40.020 06:06:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:40.020 06:06:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:40.020 06:06:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:40.020 06:06:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.020 06:06:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.020 06:06:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.020 06:06:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.020 06:06:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.020 06:06:46 -- paths/export.sh@5 -- # export PATH 00:11:40.020 06:06:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.020 06:06:46 -- nvmf/common.sh@46 -- # : 0 00:11:40.020 06:06:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:40.020 06:06:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:40.020 06:06:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:40.020 06:06:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:40.020 06:06:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:40.020 06:06:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:40.020 06:06:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:40.020 06:06:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:40.020 06:06:46 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:40.020 06:06:46 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:11:40.020 06:06:46 -- target/abort.sh@14 -- # nvmftestinit 00:11:40.020 06:06:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:40.020 06:06:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:40.020 06:06:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:40.020 06:06:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:40.020 06:06:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:40.020 06:06:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.020 06:06:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:40.020 06:06:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.020 06:06:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:11:40.020 06:06:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:11:40.020 06:06:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:11:40.020 06:06:46 -- common/autotest_common.sh@10 -- # set +x 00:11:42.551 06:06:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:42.551 06:06:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:11:42.551 06:06:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:11:42.551 06:06:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:11:42.551 06:06:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:11:42.551 06:06:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:11:42.551 06:06:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:11:42.551 06:06:48 -- nvmf/common.sh@294 -- # net_devs=() 00:11:42.551 06:06:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:11:42.551 06:06:48 -- nvmf/common.sh@295 -- 
# e810=() 00:11:42.551 06:06:48 -- nvmf/common.sh@295 -- # local -ga e810 00:11:42.551 06:06:48 -- nvmf/common.sh@296 -- # x722=() 00:11:42.551 06:06:48 -- nvmf/common.sh@296 -- # local -ga x722 00:11:42.551 06:06:48 -- nvmf/common.sh@297 -- # mlx=() 00:11:42.551 06:06:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:11:42.551 06:06:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:42.551 06:06:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:42.551 06:06:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:42.551 06:06:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:42.551 06:06:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:42.551 06:06:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:42.551 06:06:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:42.551 06:06:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:42.551 06:06:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:42.551 06:06:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:42.551 06:06:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:42.551 06:06:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:11:42.551 06:06:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:11:42.551 06:06:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:11:42.551 06:06:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:11:42.551 06:06:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:11:42.551 06:06:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:11:42.551 06:06:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:42.551 06:06:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:42.551 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:42.551 06:06:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:42.551 06:06:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:42.551 06:06:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.551 06:06:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.551 06:06:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:42.551 06:06:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:42.551 06:06:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:42.551 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:42.551 06:06:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:42.551 06:06:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:42.551 06:06:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.551 06:06:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.551 06:06:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:42.551 06:06:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:11:42.551 06:06:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:11:42.551 06:06:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:11:42.551 06:06:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:42.551 06:06:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.551 06:06:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:42.551 06:06:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.551 06:06:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:42.551 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:11:42.551 06:06:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.551 06:06:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:42.551 06:06:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.551 06:06:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:42.551 06:06:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.551 06:06:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:42.551 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:42.551 06:06:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.551 06:06:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:11:42.551 06:06:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:11:42.551 06:06:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:11:42.551 06:06:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:11:42.551 06:06:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:11:42.551 06:06:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:42.551 06:06:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:42.551 06:06:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:42.551 06:06:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:11:42.551 06:06:48 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:42.551 06:06:48 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:42.551 06:06:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:11:42.551 06:06:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:42.551 06:06:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:42.551 06:06:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:11:42.551 06:06:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:11:42.551 06:06:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:11:42.551 06:06:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:42.551 06:06:48 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:42.551 06:06:48 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:42.551 06:06:48 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:11:42.551 06:06:48 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:42.551 06:06:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:42.551 06:06:48 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:42.551 06:06:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:11:42.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:42.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:11:42.551 00:11:42.551 --- 10.0.0.2 ping statistics --- 00:11:42.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.551 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:11:42.551 06:06:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:42.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:42.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:11:42.551 00:11:42.551 --- 10.0.0.1 ping statistics --- 00:11:42.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.551 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:11:42.551 06:06:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:42.551 06:06:48 -- nvmf/common.sh@410 -- # return 0 00:11:42.551 06:06:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:42.551 06:06:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:42.551 06:06:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:42.551 06:06:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:42.551 06:06:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:42.551 06:06:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:42.551 06:06:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:42.551 06:06:48 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:11:42.551 06:06:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:42.551 06:06:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:42.551 06:06:48 -- common/autotest_common.sh@10 -- # set +x 00:11:42.551 06:06:48 -- nvmf/common.sh@469 -- # nvmfpid=1069349 00:11:42.551 06:06:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:42.551 06:06:48 -- nvmf/common.sh@470 -- # waitforlisten 1069349 00:11:42.551 06:06:48 -- common/autotest_common.sh@819 -- # '[' -z 1069349 ']' 00:11:42.551 06:06:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.551 06:06:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:42.552 06:06:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.552 06:06:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:42.552 06:06:48 -- common/autotest_common.sh@10 -- # set +x 00:11:42.552 [2024-07-13 06:06:48.741222] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:42.552 [2024-07-13 06:06:48.741297] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.552 EAL: No free 2048 kB hugepages reported on node 1 00:11:42.552 [2024-07-13 06:06:48.817181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:42.552 [2024-07-13 06:06:48.938998] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:42.552 [2024-07-13 06:06:48.939153] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:42.552 [2024-07-13 06:06:48.939173] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:42.552 [2024-07-13 06:06:48.939188] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
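
The two ping checks above are the last step of the TCP bring-up that common.sh performed just before starting the target; condensed from the xtrace (cvl_0_0/cvl_0_1 are the E810 ports detected on this host), the sequence was:

    ip netns add cvl_0_0_ns_spdk                                    # target gets its own network namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # first E810 port moves into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # allow NVMe/TCP traffic to port 4420
    ping -c 1 10.0.0.2                                              # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # and back

nvmf_tgt itself is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE), which is the process whose startup notices appear around this point in the log.
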
00:11:42.552 [2024-07-13 06:06:48.939254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.552 [2024-07-13 06:06:48.939320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:42.552 [2024-07-13 06:06:48.939324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.486 06:06:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:43.486 06:06:49 -- common/autotest_common.sh@852 -- # return 0 00:11:43.486 06:06:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:43.486 06:06:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:43.486 06:06:49 -- common/autotest_common.sh@10 -- # set +x 00:11:43.486 06:06:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.486 06:06:49 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:11:43.486 06:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:43.486 06:06:49 -- common/autotest_common.sh@10 -- # set +x 00:11:43.486 [2024-07-13 06:06:49.736088] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:43.486 06:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:43.486 06:06:49 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:11:43.486 06:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:43.486 06:06:49 -- common/autotest_common.sh@10 -- # set +x 00:11:43.486 Malloc0 00:11:43.486 06:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:43.486 06:06:49 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:43.486 06:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:43.486 06:06:49 -- common/autotest_common.sh@10 -- # set +x 00:11:43.486 Delay0 00:11:43.486 06:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:43.486 06:06:49 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:43.486 06:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:43.486 06:06:49 -- common/autotest_common.sh@10 -- # set +x 00:11:43.486 06:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:43.486 06:06:49 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:11:43.486 06:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:43.486 06:06:49 -- common/autotest_common.sh@10 -- # set +x 00:11:43.486 06:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:43.486 06:06:49 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:43.486 06:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:43.486 06:06:49 -- common/autotest_common.sh@10 -- # set +x 00:11:43.486 [2024-07-13 06:06:49.812235] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:43.486 06:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:43.486 06:06:49 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:43.486 06:06:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:43.486 06:06:49 -- common/autotest_common.sh@10 -- # set +x 00:11:43.486 06:06:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:43.486 06:06:49 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:11:43.486 EAL: No free 2048 kB hugepages reported on node 1 00:11:43.486 [2024-07-13 06:06:49.918994] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:46.017 Initializing NVMe Controllers 00:11:46.017 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:46.017 controller IO queue size 128 less than required 00:11:46.017 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:11:46.017 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:11:46.017 Initialization complete. Launching workers. 00:11:46.017 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33750 00:11:46.017 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33811, failed to submit 62 00:11:46.017 success 33750, unsuccess 61, failed 0 00:11:46.017 06:06:52 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:46.017 06:06:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:46.017 06:06:52 -- common/autotest_common.sh@10 -- # set +x 00:11:46.017 06:06:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:46.017 06:06:52 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:46.017 06:06:52 -- target/abort.sh@38 -- # nvmftestfini 00:11:46.017 06:06:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:46.017 06:06:52 -- nvmf/common.sh@116 -- # sync 00:11:46.017 06:06:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:46.017 06:06:52 -- nvmf/common.sh@119 -- # set +e 00:11:46.017 06:06:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:46.017 06:06:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:46.017 rmmod nvme_tcp 00:11:46.017 rmmod nvme_fabrics 00:11:46.017 rmmod nvme_keyring 00:11:46.017 06:06:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:46.017 06:06:52 -- nvmf/common.sh@123 -- # set -e 00:11:46.017 06:06:52 -- nvmf/common.sh@124 -- # return 0 00:11:46.017 06:06:52 -- nvmf/common.sh@477 -- # '[' -n 1069349 ']' 00:11:46.017 06:06:52 -- nvmf/common.sh@478 -- # killprocess 1069349 00:11:46.017 06:06:52 -- common/autotest_common.sh@926 -- # '[' -z 1069349 ']' 00:11:46.017 06:06:52 -- common/autotest_common.sh@930 -- # kill -0 1069349 00:11:46.017 06:06:52 -- common/autotest_common.sh@931 -- # uname 00:11:46.017 06:06:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:46.017 06:06:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1069349 00:11:46.017 06:06:52 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:11:46.017 06:06:52 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:11:46.017 06:06:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1069349' 00:11:46.017 killing process with pid 1069349 00:11:46.017 06:06:52 -- common/autotest_common.sh@945 -- # kill 1069349 00:11:46.017 06:06:52 -- common/autotest_common.sh@950 -- # wait 1069349 00:11:46.017 06:06:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:46.017 06:06:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:46.017 06:06:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:46.017 06:06:52 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:46.017 06:06:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:46.017 
06:06:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.017 06:06:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:46.017 06:06:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.557 06:06:54 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:11:48.557 00:11:48.557 real 0m8.026s 00:11:48.557 user 0m12.920s 00:11:48.557 sys 0m2.546s 00:11:48.557 06:06:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:48.557 06:06:54 -- common/autotest_common.sh@10 -- # set +x 00:11:48.557 ************************************ 00:11:48.557 END TEST nvmf_abort 00:11:48.557 ************************************ 00:11:48.557 06:06:54 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:48.557 06:06:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:48.557 06:06:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:48.557 06:06:54 -- common/autotest_common.sh@10 -- # set +x 00:11:48.557 ************************************ 00:11:48.557 START TEST nvmf_ns_hotplug_stress 00:11:48.557 ************************************ 00:11:48.557 06:06:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:48.557 * Looking for test storage... 00:11:48.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.557 06:06:54 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:48.557 06:06:54 -- nvmf/common.sh@7 -- # uname -s 00:11:48.557 06:06:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.557 06:06:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.557 06:06:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.557 06:06:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.557 06:06:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.557 06:06:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.557 06:06:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.557 06:06:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.557 06:06:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.557 06:06:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.558 06:06:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:48.558 06:06:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:48.558 06:06:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.558 06:06:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:48.558 06:06:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:48.558 06:06:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:48.558 06:06:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.558 06:06:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.558 06:06:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.558 06:06:54 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.558 06:06:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.558 06:06:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.558 06:06:54 -- paths/export.sh@5 -- # export PATH 00:11:48.558 06:06:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.558 06:06:54 -- nvmf/common.sh@46 -- # : 0 00:11:48.558 06:06:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:48.558 06:06:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:48.558 06:06:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:48.558 06:06:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.558 06:06:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.558 06:06:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:48.558 06:06:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:48.558 06:06:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:48.558 06:06:54 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:48.558 06:06:54 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:11:48.558 06:06:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:48.558 06:06:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.558 06:06:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:48.558 06:06:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:48.558 06:06:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:48.558 06:06:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:11:48.558 06:06:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:48.558 06:06:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.558 06:06:54 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:11:48.558 06:06:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:11:48.558 06:06:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:11:48.558 06:06:54 -- common/autotest_common.sh@10 -- # set +x 00:11:50.460 06:06:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:50.460 06:06:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:11:50.460 06:06:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:11:50.460 06:06:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:11:50.460 06:06:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:11:50.460 06:06:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:11:50.460 06:06:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:11:50.460 06:06:56 -- nvmf/common.sh@294 -- # net_devs=() 00:11:50.460 06:06:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:11:50.460 06:06:56 -- nvmf/common.sh@295 -- # e810=() 00:11:50.460 06:06:56 -- nvmf/common.sh@295 -- # local -ga e810 00:11:50.460 06:06:56 -- nvmf/common.sh@296 -- # x722=() 00:11:50.460 06:06:56 -- nvmf/common.sh@296 -- # local -ga x722 00:11:50.460 06:06:56 -- nvmf/common.sh@297 -- # mlx=() 00:11:50.460 06:06:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:11:50.460 06:06:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:50.461 06:06:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:50.461 06:06:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:50.461 06:06:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:50.461 06:06:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:50.461 06:06:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:50.461 06:06:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:50.461 06:06:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:50.461 06:06:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:50.461 06:06:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:50.461 06:06:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:50.461 06:06:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:11:50.461 06:06:56 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:11:50.461 06:06:56 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:11:50.461 06:06:56 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:11:50.461 06:06:56 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:11:50.461 06:06:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:11:50.461 06:06:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:50.461 06:06:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:50.461 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:50.461 06:06:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:50.461 06:06:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:50.461 06:06:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.461 06:06:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.461 06:06:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:50.461 06:06:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:50.461 06:06:56 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:50.461 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:50.461 06:06:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:50.461 06:06:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:50.461 06:06:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.461 06:06:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.461 06:06:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:50.461 06:06:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:11:50.461 06:06:56 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:11:50.461 06:06:56 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:11:50.461 06:06:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:50.461 06:06:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.461 06:06:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:50.461 06:06:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.461 06:06:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:50.461 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:50.461 06:06:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.461 06:06:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:50.461 06:06:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.461 06:06:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:50.461 06:06:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.461 06:06:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:50.461 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:50.461 06:06:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.461 06:06:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:11:50.461 06:06:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:11:50.461 06:06:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:11:50.461 06:06:56 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:11:50.461 06:06:56 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:11:50.461 06:06:56 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:50.461 06:06:56 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:50.461 06:06:56 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:50.461 06:06:56 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:11:50.461 06:06:56 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:50.461 06:06:56 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:50.461 06:06:56 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:11:50.461 06:06:56 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:50.461 06:06:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:50.461 06:06:56 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:11:50.461 06:06:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:11:50.461 06:06:56 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:11:50.461 06:06:56 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:50.461 06:06:56 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:50.461 06:06:56 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:50.461 06:06:56 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:11:50.461 06:06:56 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
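The interface plumbing just traced builds a point-to-point layout between the two E810 ports found above: cvl_0_0 is moved into a private network namespace as the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1). Condensed into a sketch with the same names and addresses as the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up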
00:11:50.461 06:06:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:50.461 06:06:56 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:50.461 06:06:56 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:11:50.461 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:50.461 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:11:50.461 00:11:50.461 --- 10.0.0.2 ping statistics --- 00:11:50.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.461 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:11:50.461 06:06:56 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:50.461 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:50.461 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:11:50.461 00:11:50.461 --- 10.0.0.1 ping statistics --- 00:11:50.461 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.461 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:11:50.461 06:06:56 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:50.461 06:06:56 -- nvmf/common.sh@410 -- # return 0 00:11:50.461 06:06:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:50.461 06:06:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:50.461 06:06:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:50.461 06:06:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:50.461 06:06:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:50.461 06:06:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:50.461 06:06:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:50.461 06:06:56 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:11:50.461 06:06:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:50.461 06:06:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:50.461 06:06:56 -- common/autotest_common.sh@10 -- # set +x 00:11:50.461 06:06:56 -- nvmf/common.sh@469 -- # nvmfpid=1071731 00:11:50.461 06:06:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:50.461 06:06:56 -- nvmf/common.sh@470 -- # waitforlisten 1071731 00:11:50.461 06:06:56 -- common/autotest_common.sh@819 -- # '[' -z 1071731 ']' 00:11:50.461 06:06:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.461 06:06:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:50.461 06:06:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.461 06:06:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:50.461 06:06:56 -- common/autotest_common.sh@10 -- # set +x 00:11:50.461 [2024-07-13 06:06:56.740597] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
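With addressing in place, the trace opens TCP port 4420 on the initiator-side interface, checks reachability in both directions, and launches the SPDK target inside the namespace. Roughly (paths shortened here; the real run uses the full workspace path and records the pid as nvmfpid=1071731):

    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # accept NVMe/TCP traffic on the initiator port
    ping -c 1 10.0.0.2                                              # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target namespace -> root namespace
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!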
00:11:50.461 [2024-07-13 06:06:56.740673] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.461 EAL: No free 2048 kB hugepages reported on node 1 00:11:50.461 [2024-07-13 06:06:56.815196] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:50.461 [2024-07-13 06:06:56.934476] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:50.461 [2024-07-13 06:06:56.934631] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:50.461 [2024-07-13 06:06:56.934652] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:50.461 [2024-07-13 06:06:56.934666] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:50.461 [2024-07-13 06:06:56.934769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:50.461 [2024-07-13 06:06:56.934815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:50.461 [2024-07-13 06:06:56.934818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.396 06:06:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:51.396 06:06:57 -- common/autotest_common.sh@852 -- # return 0 00:11:51.396 06:06:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:51.396 06:06:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:51.396 06:06:57 -- common/autotest_common.sh@10 -- # set +x 00:11:51.396 06:06:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.396 06:06:57 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:11:51.396 06:06:57 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:51.396 [2024-07-13 06:06:57.894526] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:51.653 06:06:57 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:51.911 06:06:58 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:51.911 [2024-07-13 06:06:58.389239] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:51.911 06:06:58 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:52.168 06:06:58 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:11:52.425 Malloc0 00:11:52.425 06:06:58 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:52.987 Delay0 00:11:52.987 06:06:59 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:52.987 06:06:59 -- target/ns_hotplug_stress.sh@35 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:11:53.243 NULL1 00:11:53.499 06:06:59 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:53.499 06:06:59 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1072170 00:11:53.499 06:06:59 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:11:53.499 06:06:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1072170 00:11:53.499 06:06:59 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:53.755 EAL: No free 2048 kB hugepages reported on node 1 00:11:55.122 Read completed with error (sct=0, sc=11) 00:11:55.122 06:07:01 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:55.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:55.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:55.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:55.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:55.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:55.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:55.122 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:55.122 06:07:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:11:55.122 06:07:01 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:11:55.379 true 00:11:55.379 06:07:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1072170 00:11:55.379 06:07:01 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.309 06:07:02 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:56.309 06:07:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:11:56.309 06:07:02 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:11:56.565 true 00:11:56.565 06:07:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1072170 00:11:56.565 06:07:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.822 06:07:03 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:57.079 06:07:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:11:57.079 06:07:03 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:11:57.336 true 00:11:57.336 06:07:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1072170 00:11:57.336 06:07:03 -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:58.266 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:58.266 06:07:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:58.266 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:58.266 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:58.523 06:07:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:11:58.523 06:07:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:11:58.781 true 00:11:58.781 06:07:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1072170 00:11:58.781 06:07:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.037 06:07:05 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:59.294 06:07:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:11:59.294 06:07:05 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:11:59.552 true 00:11:59.552 06:07:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1072170 00:11:59.552 06:07:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.484 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:00.484 06:07:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:00.741 06:07:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:12:00.741 06:07:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:01.005 true 00:12:01.005 06:07:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1072170 00:12:01.005 06:07:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:01.005 06:07:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:01.297 06:07:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:12:01.297 06:07:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:01.555 true 00:12:01.555 06:07:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1072170 00:12:01.555 06:07:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:02.488 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.488 06:07:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:02.746 06:07:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 
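The numbered iterations above come from the stress loop at the heart of this test: a 30-second randread spdk_nvme_perf job runs against cnode1 while the script repeatedly hot-removes namespace 1, re-attaches the Delay0 bdev, and grows NULL1 by one unit per pass. Reconstructed roughly from the trace ($rpc_py and $PERF_PID are the values set earlier; null_size starts at 1000):

    while kill -0 "$PERF_PID"; do                                           # loop for as long as perf is alive
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1       # hot-remove namespace 1 under I/O
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0     # re-attach the delay bdev
        null_size=$(( null_size + 1 ))
        $rpc_py bdev_null_resize NULL1 $null_size                           # 1001, 1002, ... in the trace
    done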
00:12:02.746 06:07:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:03.004 true 00:12:03.004 06:07:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1072170 00:12:03.004 06:07:09 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:03.261 06:07:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:03.518 06:07:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:12:03.518 06:07:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:03.776 true 00:12:03.777 06:07:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1072170 00:12:03.777 06:07:10 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:04.708 06:07:11 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:04.708 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:04.708 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:04.708 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:04.965 06:07:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:12:04.966 06:07:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:05.224 true 00:12:05.224 06:07:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1072170 00:12:05.224 06:07:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:05.482 06:07:11 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:05.739 06:07:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:12:05.739 06:07:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:05.739 true 00:12:05.739 06:07:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1072170 00:12:05.739 06:07:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:06.671 06:07:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:06.671 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:06.927 06:07:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:12:06.927 06:07:13 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:07.185 true 00:12:07.185 06:07:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1072170 00:12:07.185 06:07:13 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:07.443 06:07:13 -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:07.700 06:07:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:12:07.700 06:07:14 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:07.958 true 00:12:07.958 06:07:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1072170 00:12:07.958 06:07:14 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:08.890 06:07:15 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:08.890 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.148 06:07:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:12:09.148 06:07:15 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:09.405 true 00:12:09.405 06:07:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1072170 00:12:09.405 06:07:15 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:09.662 06:07:16 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:09.920 06:07:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:12:09.920 06:07:16 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:10.179 true 00:12:10.179 06:07:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1072170 00:12:10.179 06:07:16 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:11.112 06:07:17 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:11.112 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:11.369 06:07:17 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:12:11.369 06:07:17 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:11.626 true 00:12:11.626 06:07:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1072170 00:12:11.626 06:07:18 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:11.884 06:07:18 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:12.142 06:07:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:12:12.142 06:07:18 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:12.399 true 00:12:12.399 06:07:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1072170 00:12:12.399 06:07:18 -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:13.372 06:07:19 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:13.372 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:13.630 06:07:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:12:13.630 06:07:19 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:13.887 true 00:12:13.887 06:07:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1072170 00:12:13.887 06:07:20 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:14.145 06:07:20 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:14.402 06:07:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:12:14.402 06:07:20 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:14.402 true 00:12:14.402 06:07:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1072170 00:12:14.402 06:07:20 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:15.334 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:15.334 06:07:21 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:15.334 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:15.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:15.592 06:07:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:12:15.592 06:07:22 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:12:15.849 true 00:12:15.849 06:07:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1072170 00:12:15.849 06:07:22 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:16.106 06:07:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:16.363 06:07:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:12:16.363 06:07:22 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:12:16.619 true 00:12:16.619 06:07:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1072170 00:12:16.619 06:07:22 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.549 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:17.549 06:07:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:17.807 06:07:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 
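The repeated 'Read completed with error (sct=0, sc=11)' lines are the expected side effect of that loop: status code type 0, status code 0x0b is the generic NVMe 'Invalid Namespace or Format' status, which is what reads against namespace 1 return while it is detached, and perf rate-limits the message ('Message suppressed 999 times'). If a copy of this console output is saved to a file, the error volume can be gauged with a simple count (build.log is a placeholder name):

    grep -c 'Read completed with error (sct=0, sc=11)' build.log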
00:12:17.807 06:07:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:12:18.063 true 00:12:18.063 06:07:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1072170 00:12:18.063 06:07:24 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:18.320 06:07:24 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:18.320 06:07:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:12:18.320 06:07:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:12:18.577 true 00:12:18.577 06:07:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1072170 00:12:18.577 06:07:25 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:19.948 06:07:26 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:19.948 06:07:26 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:12:19.948 06:07:26 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:12:20.206 true 00:12:20.206 06:07:26 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1072170 00:12:20.206 06:07:26 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:20.464 06:07:26 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:20.721 06:07:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:12:20.721 06:07:27 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:12:20.979 true 00:12:20.979 06:07:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1072170 00:12:20.979 06:07:27 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:21.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.914 06:07:28 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:21.914 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:21.914 06:07:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:12:21.914 06:07:28 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:12:22.172 true 00:12:22.172 06:07:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1072170 00:12:22.172 06:07:28 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.429 06:07:28 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:22.687 06:07:29 -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:12:22.687 06:07:29 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:12:22.944 true 00:12:22.944 06:07:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1072170 00:12:22.944 06:07:29 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:23.878 06:07:30 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:24.137 Initializing NVMe Controllers 00:12:24.137 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:24.137 Controller IO queue size 128, less than required. 00:12:24.137 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:24.137 Controller IO queue size 128, less than required. 00:12:24.137 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:24.137 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:24.137 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:12:24.137 Initialization complete. Launching workers. 00:12:24.137 ======================================================== 00:12:24.137 Latency(us) 00:12:24.137 Device Information : IOPS MiB/s Average min max 00:12:24.137 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 955.06 0.47 75255.66 2271.95 1053844.79 00:12:24.137 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 12932.57 6.31 9897.15 1572.39 443138.00 00:12:24.137 ======================================================== 00:12:24.137 Total : 13887.63 6.78 14391.90 1572.39 1053844.79 00:12:24.137 00:12:24.137 06:07:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:12:24.137 06:07:30 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:12:24.395 true 00:12:24.395 06:07:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1072170 00:12:24.395 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1072170) - No such process 00:12:24.395 06:07:30 -- target/ns_hotplug_stress.sh@53 -- # wait 1072170 00:12:24.395 06:07:30 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.652 06:07:31 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:24.910 06:07:31 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:12:24.910 06:07:31 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:12:24.910 06:07:31 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:12:24.910 06:07:31 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:24.910 06:07:31 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:12:25.178 null0 00:12:25.178 06:07:31 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:25.178 06:07:31 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:25.178 06:07:31 -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:12:25.442 null1 00:12:25.442 06:07:31 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:25.442 06:07:31 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:25.442 06:07:31 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:12:25.700 null2 00:12:25.700 06:07:31 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:25.700 06:07:31 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:25.700 06:07:31 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:12:25.957 null3 00:12:25.957 06:07:32 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:25.957 06:07:32 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:25.957 06:07:32 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:12:26.214 null4 00:12:26.214 06:07:32 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:26.214 06:07:32 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:26.214 06:07:32 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:12:26.472 null5 00:12:26.472 06:07:32 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:26.472 06:07:32 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:26.472 06:07:32 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:12:26.729 null6 00:12:26.729 06:07:33 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:26.729 06:07:33 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:26.729 06:07:33 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:12:26.729 null7 00:12:26.986 06:07:33 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:26.986 06:07:33 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:26.986 06:07:33 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:12:26.986 06:07:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:26.986 06:07:33 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:26.986 06:07:33 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:26.986 06:07:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:26.986 06:07:33 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:12:26.986 06:07:33 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:12:26.986 06:07:33 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:26.986 06:07:33 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:12:26.986 06:07:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.986 06:07:33 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:26.986 06:07:33 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:12:26.986 06:07:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:26.986 06:07:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:26.986 06:07:33 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:12:26.986 06:07:33 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:26.986 06:07:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.986 06:07:33 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:26.986 06:07:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:26.986 06:07:33 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:26.986 06:07:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:26.986 06:07:33 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
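After the timed perf run ends (the 'No such process' from kill -0 and the wait on pid 1072170 above), the test switches to a concurrency phase: eight null bdevs are created and eight background workers each add and remove one namespace, all against cnode1 at once. A sketch of the launcher as reconstructed from the trace (add_remove is the per-worker helper, sketched after the interleaved output below):

    nthreads=8
    pids=()
    for (( i = 0; i < nthreads; i++ )); do
        $rpc_py bdev_null_create null$i 100 4096      # null0 .. null7, same arguments as the trace
    done
    for (( i = 0; i < nthreads; i++ )); do
        add_remove $(( i + 1 )) null$i &              # worker i exercises namespace id i+1
        pids+=($!)
    done
    wait "${pids[@]}"                                 # pids 1076327 .. 1076340 in this run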
00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@66 -- # wait 1076327 1076328 1076329 1076332 1076334 1076336 1076338 1076340 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:26.987 06:07:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:27.244 06:07:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:27.244 06:07:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:27.244 06:07:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:27.244 06:07:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:27.244 06:07:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:27.244 06:07:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:27.244 06:07:33 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:27.244 06:07:33 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:27.502 06:07:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.502 06:07:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.502 06:07:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:27.502 06:07:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.502 06:07:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.502 06:07:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:27.502 06:07:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.502 06:07:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.502 06:07:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:27.502 06:07:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.502 06:07:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.502 06:07:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:27.502 06:07:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.502 06:07:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.502 06:07:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:27.502 06:07:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.502 06:07:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.502 06:07:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:27.502 06:07:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.502 06:07:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.502 06:07:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:27.502 06:07:33 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:27.502 06:07:33 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:27.502 06:07:33 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:27.759 06:07:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:27.759 06:07:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:27.759 06:07:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:27.759 06:07:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:27.759 06:07:34 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:27.759 06:07:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:27.759 06:07:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:27.759 06:07:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:28.017 06:07:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.017 06:07:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.017 06:07:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:28.017 06:07:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.017 06:07:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.017 06:07:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:28.017 06:07:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.017 06:07:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.017 06:07:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:28.017 06:07:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.017 06:07:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.017 06:07:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:28.017 06:07:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.017 06:07:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.017 06:07:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.017 06:07:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:28.017 06:07:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.017 06:07:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:28.017 06:07:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.017 06:07:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.017 06:07:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:28.017 06:07:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.017 06:07:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.017 06:07:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:28.275 06:07:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:28.275 06:07:34 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:28.275 06:07:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:28.275 06:07:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:28.275 06:07:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:28.275 06:07:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:28.275 06:07:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:28.275 06:07:34 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:28.532 06:07:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.532 06:07:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.532 06:07:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:28.532 06:07:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.532 06:07:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.532 06:07:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:28.532 06:07:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.532 06:07:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.532 06:07:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:28.532 06:07:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.532 06:07:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.532 06:07:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:28.532 06:07:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.532 06:07:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.532 06:07:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:28.532 06:07:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.532 06:07:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.532 06:07:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:28.532 06:07:34 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.532 06:07:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.532 06:07:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:28.532 06:07:34 -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:28.532 06:07:34 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:28.532 06:07:34 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:28.788 06:07:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:28.788 06:07:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:28.788 06:07:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:28.788 06:07:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:28.788 06:07:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:28.788 06:07:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:28.788 06:07:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:28.788 06:07:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:29.045 06:07:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.045 06:07:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.045 06:07:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:29.045 06:07:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.045 06:07:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.045 06:07:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:29.045 06:07:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.045 06:07:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.045 06:07:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:29.045 06:07:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.045 06:07:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.045 06:07:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.045 06:07:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.045 06:07:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:29.045 06:07:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:29.045 06:07:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.045 06:07:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
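The churn running through this stretch is the body of ns_hotplug_stress.sh: a counter loop capped at ten rounds (the (( ++i )) / (( i < 10 )) checks) attaches the eight null bdevs null0..null7 to nqn.2016-06.io.spdk:cnode1 as namespaces 1..8 in a varying order, a matching batch of remove_ns calls strips them off again, and the cycle repeats. A minimal sketch of that pattern using the same two rpc.py calls seen in the log; the shuf-based ordering and the explicit pass count are illustrative assumptions, not the script's own logic:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  for pass in $(seq 1 10); do
      for n in $(shuf -e 1 2 3 4 5 6 7 8); do    # attach null bdevs as nsid 1..8 in random order
          "$rpc" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
      done
      for n in $(shuf -e 1 2 3 4 5 6 7 8); do    # detach them again
          "$rpc" nvmf_subsystem_remove_ns "$nqn" "$n"
      done
  done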
00:12:29.045 06:07:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:29.045 06:07:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.045 06:07:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.045 06:07:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:29.045 06:07:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.045 06:07:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.045 06:07:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:29.302 06:07:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:29.302 06:07:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:29.302 06:07:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:29.302 06:07:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:29.302 06:07:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:29.302 06:07:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:29.302 06:07:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.302 06:07:35 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:29.559 06:07:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.560 06:07:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.560 06:07:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:29.560 06:07:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.560 06:07:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.560 06:07:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:29.560 06:07:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.560 06:07:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.560 06:07:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:29.560 06:07:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.560 06:07:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.560 06:07:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:29.560 06:07:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.560 06:07:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.560 06:07:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:29.560 06:07:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.560 06:07:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.560 06:07:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:29.560 06:07:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.560 06:07:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.560 06:07:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:29.560 06:07:35 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:29.560 06:07:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:29.560 06:07:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:29.816 06:07:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:29.816 06:07:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:29.816 06:07:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:29.816 06:07:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:29.816 06:07:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.816 06:07:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:29.816 06:07:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:29.816 06:07:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:30.087 06:07:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.087 06:07:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.087 06:07:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:30.087 06:07:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.087 06:07:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.087 06:07:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:30.087 06:07:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:12:30.087 06:07:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.087 06:07:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:30.087 06:07:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.087 06:07:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.088 06:07:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.088 06:07:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:30.088 06:07:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.088 06:07:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:30.088 06:07:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.088 06:07:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.088 06:07:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:30.088 06:07:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.088 06:07:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.088 06:07:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:30.088 06:07:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.088 06:07:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.088 06:07:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:30.344 06:07:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:30.344 06:07:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:30.344 06:07:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:30.344 06:07:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:30.344 06:07:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:30.344 06:07:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:30.344 06:07:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:30.344 06:07:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:30.601 06:07:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.601 06:07:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.601 06:07:36 -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:30.601 06:07:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.601 06:07:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.601 06:07:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:30.601 06:07:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.601 06:07:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.601 06:07:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:30.601 06:07:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.601 06:07:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.601 06:07:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:30.601 06:07:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.601 06:07:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.601 06:07:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:30.601 06:07:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.601 06:07:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.601 06:07:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:30.601 06:07:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.601 06:07:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.601 06:07:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:30.601 06:07:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:30.601 06:07:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:30.601 06:07:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:30.858 06:07:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:30.858 06:07:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:30.858 06:07:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:30.858 06:07:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:30.858 06:07:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:30.858 06:07:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:12:30.858 06:07:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:30.858 06:07:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:31.115 06:07:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.115 06:07:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.115 06:07:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:31.115 06:07:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.115 06:07:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.115 06:07:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:31.115 06:07:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.115 06:07:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.115 06:07:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:31.115 06:07:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.116 06:07:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.116 06:07:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:31.116 06:07:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.116 06:07:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.116 06:07:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:31.116 06:07:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.116 06:07:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.116 06:07:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:31.116 06:07:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.116 06:07:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.116 06:07:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:31.116 06:07:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.116 06:07:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.116 06:07:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:31.374 06:07:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:31.374 06:07:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:31.374 06:07:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:12:31.374 06:07:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:31.374 06:07:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:31.374 06:07:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:31.374 06:07:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:31.374 06:07:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:31.632 06:07:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.632 06:07:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.632 06:07:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:31.632 06:07:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.632 06:07:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.632 06:07:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:31.632 06:07:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.632 06:07:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.632 06:07:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:31.632 06:07:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.632 06:07:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.632 06:07:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:31.632 06:07:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.632 06:07:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.632 06:07:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:31.632 06:07:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.632 06:07:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.632 06:07:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:31.632 06:07:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.632 06:07:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.632 06:07:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:31.632 06:07:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:31.632 06:07:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:31.632 06:07:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:12:31.891 06:07:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:31.891 06:07:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:31.891 06:07:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:31.891 06:07:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:31.891 06:07:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:31.891 06:07:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:31.891 06:07:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:31.891 06:07:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:32.149 06:07:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:32.149 06:07:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:32.149 06:07:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:32.149 06:07:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:32.149 06:07:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:32.149 06:07:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:32.149 06:07:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:32.149 06:07:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:32.149 06:07:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:32.149 06:07:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:32.149 06:07:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:32.149 06:07:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:32.149 06:07:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:32.149 06:07:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:32.149 06:07:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:32.149 06:07:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:32.149 06:07:38 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:32.149 06:07:38 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:12:32.149 06:07:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:32.149 06:07:38 -- nvmf/common.sh@116 -- # sync 00:12:32.149 06:07:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:32.149 06:07:38 -- nvmf/common.sh@119 -- # set +e 00:12:32.149 06:07:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:32.149 06:07:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:32.149 rmmod nvme_tcp 00:12:32.149 rmmod nvme_fabrics 00:12:32.149 rmmod nvme_keyring 00:12:32.407 06:07:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:32.407 06:07:38 -- nvmf/common.sh@123 -- # set -e 00:12:32.407 06:07:38 -- nvmf/common.sh@124 -- # return 0 00:12:32.407 06:07:38 -- nvmf/common.sh@477 -- # '[' -n 1071731 ']' 00:12:32.407 06:07:38 -- 
nvmf/common.sh@478 -- # killprocess 1071731
00:12:32.407 06:07:38 -- common/autotest_common.sh@926 -- # '[' -z 1071731 ']'
00:12:32.407 06:07:38 -- common/autotest_common.sh@930 -- # kill -0 1071731
00:12:32.407 06:07:38 -- common/autotest_common.sh@931 -- # uname
00:12:32.407 06:07:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:12:32.407 06:07:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1071731
00:12:32.407 06:07:38 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:12:32.407 06:07:38 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:12:32.407 06:07:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1071731' killing process with pid 1071731
00:12:32.407 06:07:38 -- common/autotest_common.sh@945 -- # kill 1071731
00:12:32.407 06:07:38 -- common/autotest_common.sh@950 -- # wait 1071731
00:12:32.666 06:07:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:12:32.666 06:07:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:12:32.666 06:07:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:12:32.666 06:07:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:12:32.666 06:07:38 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:12:32.666 06:07:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:32.666 06:07:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:32.666 06:07:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:34.569 06:07:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:12:34.569
00:12:34.569 real 0m46.521s
00:12:34.569 user 3m29.001s
00:12:34.569 sys 0m15.854s
00:12:34.569 06:07:41 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:34.569 06:07:41 -- common/autotest_common.sh@10 -- # set +x
00:12:34.569 ************************************
00:12:34.569 END TEST nvmf_ns_hotplug_stress
00:12:34.569 ************************************
00:12:34.569 06:07:41 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:12:34.569 06:07:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:12:34.569 06:07:41 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:12:34.569 06:07:41 -- common/autotest_common.sh@10 -- # set +x
00:12:34.569 ************************************
00:12:34.569 START TEST nvmf_connect_stress
00:12:34.569 ************************************
00:12:34.569 06:07:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:12:34.827 * Looking for test storage...
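For reference, the teardown that closes the hotplug test above is the stock nvmftestfini/killprocess path from nvmf/common.sh and autotest_common.sh: sync, unload the kernel initiator modules, confirm the recorded target pid is still alive, kill and reap it, then drop the cvl_0_0_ns_spdk namespace and flush the initiator address. A rough sketch of that sequence; the ip netns delete line is an assumed stand-in for _remove_spdk_ns, everything else mirrors commands visible in the log:

  pid=1071731                                    # nvmf_tgt pid recorded when the target started
  sync
  modprobe -v -r nvme-tcp                        # the log shows nvme_tcp/nvme_fabrics/nvme_keyring going away here
  modprobe -v -r nvme-fabrics
  if [ -n "$pid" ] && kill -0 "$pid" 2>/dev/null; then
      kill "$pid"
      wait "$pid"                                # works because nvmf_tgt is a child of this shell
  fi
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null    # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1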
00:12:34.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:34.827 06:07:41 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:34.827 06:07:41 -- nvmf/common.sh@7 -- # uname -s 00:12:34.827 06:07:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:34.827 06:07:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.827 06:07:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.827 06:07:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.827 06:07:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.827 06:07:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.827 06:07:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.827 06:07:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.827 06:07:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.827 06:07:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.827 06:07:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:34.827 06:07:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:34.827 06:07:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.827 06:07:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.827 06:07:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:34.827 06:07:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:34.827 06:07:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.827 06:07:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.827 06:07:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.827 06:07:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.828 06:07:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.828 06:07:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.828 06:07:41 -- paths/export.sh@5 -- # export PATH 00:12:34.828 06:07:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.828 06:07:41 -- nvmf/common.sh@46 -- # : 0 00:12:34.828 06:07:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:34.828 06:07:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:34.828 06:07:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:34.828 06:07:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.828 06:07:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.828 06:07:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:34.828 06:07:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:34.828 06:07:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:34.828 06:07:41 -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:34.828 06:07:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:34.828 06:07:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:34.828 06:07:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:34.828 06:07:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:34.828 06:07:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:34.828 06:07:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.828 06:07:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:34.828 06:07:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.828 06:07:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:34.828 06:07:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:34.828 06:07:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:34.828 06:07:41 -- common/autotest_common.sh@10 -- # set +x 00:12:36.725 06:07:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:36.725 06:07:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:36.725 06:07:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:36.725 06:07:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:36.725 06:07:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:36.725 06:07:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:36.725 06:07:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:36.725 06:07:42 -- nvmf/common.sh@294 -- # net_devs=() 00:12:36.725 06:07:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:36.725 06:07:42 -- nvmf/common.sh@295 -- # e810=() 00:12:36.725 06:07:42 -- nvmf/common.sh@295 -- # local -ga e810 00:12:36.725 06:07:42 -- nvmf/common.sh@296 -- # x722=() 
00:12:36.725 06:07:42 -- nvmf/common.sh@296 -- # local -ga x722 00:12:36.725 06:07:42 -- nvmf/common.sh@297 -- # mlx=() 00:12:36.725 06:07:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:36.725 06:07:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:36.725 06:07:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:36.725 06:07:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:36.725 06:07:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:36.725 06:07:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:36.725 06:07:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:36.725 06:07:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:36.725 06:07:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:36.725 06:07:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:36.725 06:07:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:36.725 06:07:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:36.725 06:07:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:36.725 06:07:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:36.725 06:07:42 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:36.725 06:07:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:36.725 06:07:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:36.725 06:07:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:36.725 06:07:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:36.725 06:07:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:36.725 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:36.725 06:07:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:36.725 06:07:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:36.725 06:07:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.725 06:07:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.725 06:07:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:36.725 06:07:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:36.725 06:07:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:36.725 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:36.725 06:07:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:36.725 06:07:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:36.725 06:07:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.725 06:07:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.725 06:07:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:36.725 06:07:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:36.725 06:07:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:36.725 06:07:42 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:36.725 06:07:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:36.725 06:07:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.725 06:07:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:36.725 06:07:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.725 06:07:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:36.725 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:36.725 06:07:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
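The discovery above is gather_supported_nvmf_pci_devs: nvmf/common.sh buckets candidate NICs by PCI vendor/device ID (0x1592 and 0x159b for Intel E810, 0x37d2 for X722, plus several Mellanox IDs), keeps only the e810 bucket because this job runs with SPDK_TEST_NVMF_NICS=e810, and resolves each PCI function to its kernel netdev through sysfs, which is how 0000:0a:00.0 and 0000:0a:00.1 become cvl_0_0 and cvl_0_1. A hedged stand-alone sketch of that lookup; the lspci filtering is used here for brevity and is not how common.sh itself walks the bus, though the sysfs glob matches the pci_net_devs expansion shown above:

  # map Intel E810 functions (8086:1592 / 8086:159b) to their net devices via sysfs
  for id in 1592 159b; do
      for pci in $(lspci -D -d "8086:${id}" | awk '{print $1}'); do
          for net in /sys/bus/pci/devices/"$pci"/net/*; do
              [ -e "$net" ] && echo "$pci -> $(basename "$net")"
          done
      done
  done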
00:12:36.725 06:07:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:36.725 06:07:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.725 06:07:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:36.725 06:07:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.725 06:07:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:36.725 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:36.725 06:07:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.725 06:07:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:36.725 06:07:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:36.725 06:07:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:36.725 06:07:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:36.725 06:07:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:36.725 06:07:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:36.725 06:07:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:36.725 06:07:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:36.725 06:07:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:36.725 06:07:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:36.725 06:07:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:36.725 06:07:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:36.725 06:07:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:36.725 06:07:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:36.725 06:07:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:36.725 06:07:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:36.725 06:07:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:36.725 06:07:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:36.725 06:07:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:36.725 06:07:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:36.725 06:07:43 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:36.725 06:07:43 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:36.725 06:07:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:36.725 06:07:43 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:36.725 06:07:43 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:36.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:36.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:12:36.725 00:12:36.725 --- 10.0.0.2 ping statistics --- 00:12:36.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.725 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:12:36.725 06:07:43 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:36.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:36.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms
00:12:36.725
00:12:36.725 --- 10.0.0.1 ping statistics ---
00:12:36.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:36.725 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms
00:12:36.725 06:07:43 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:36.725 06:07:43 -- nvmf/common.sh@410 -- # return 0
00:12:36.725 06:07:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:12:36.725 06:07:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:36.725 06:07:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:12:36.725 06:07:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:12:36.725 06:07:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:36.725 06:07:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:12:36.725 06:07:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:12:36.726 06:07:43 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:12:36.726 06:07:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:12:36.726 06:07:43 -- common/autotest_common.sh@712 -- # xtrace_disable
00:12:36.726 06:07:43 -- common/autotest_common.sh@10 -- # set +x
00:12:36.726 06:07:43 -- nvmf/common.sh@469 -- # nvmfpid=1079111
00:12:36.726 06:07:43 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:12:36.726 06:07:43 -- nvmf/common.sh@470 -- # waitforlisten 1079111
00:12:36.726 06:07:43 -- common/autotest_common.sh@819 -- # '[' -z 1079111 ']'
00:12:36.726 06:07:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:36.726 06:07:43 -- common/autotest_common.sh@824 -- # local max_retries=100
00:12:36.726 06:07:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:36.726 06:07:43 -- common/autotest_common.sh@828 -- # xtrace_disable
00:12:36.726 06:07:43 -- common/autotest_common.sh@10 -- # set +x
00:12:36.726 [2024-07-13 06:07:43.175091] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:12:36.726 [2024-07-13 06:07:43.175162] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:36.983 EAL: No free 2048 kB hugepages reported on node 1
00:12:36.983 [2024-07-13 06:07:43.242840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:12:36.983 [2024-07-13 06:07:43.358423] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:12:36.983 [2024-07-13 06:07:43.358593] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:36.983 [2024-07-13 06:07:43.358613] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:36.983 [2024-07-13 06:07:43.358628] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
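nvmf_tcp_init and nvmfappstart, whose trace ends above, build the two-port rig this test runs on: the first E810 port (cvl_0_0) is moved into a fresh cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2/24, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24, TCP port 4420 is allowed through iptables, reachability is confirmed with one ping in each direction, and nvmf_tgt is then launched inside the namespace with core mask 0xE while waitforlisten polls /var/tmp/spdk.sock. A condensed sketch of that wiring, using the same commands the log shows; the trailing & and the compressed ordering are the only liberties taken:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &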
00:12:36.983 [2024-07-13 06:07:43.358720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.983 [2024-07-13 06:07:43.358785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:36.983 [2024-07-13 06:07:43.358788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.914 06:07:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:37.914 06:07:44 -- common/autotest_common.sh@852 -- # return 0 00:12:37.915 06:07:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:37.915 06:07:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:37.915 06:07:44 -- common/autotest_common.sh@10 -- # set +x 00:12:37.915 06:07:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:37.915 06:07:44 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:37.915 06:07:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:37.915 06:07:44 -- common/autotest_common.sh@10 -- # set +x 00:12:37.915 [2024-07-13 06:07:44.120024] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:37.915 06:07:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:37.915 06:07:44 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:37.915 06:07:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:37.915 06:07:44 -- common/autotest_common.sh@10 -- # set +x 00:12:37.915 06:07:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:37.915 06:07:44 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.915 06:07:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:37.915 06:07:44 -- common/autotest_common.sh@10 -- # set +x 00:12:37.915 [2024-07-13 06:07:44.147980] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.915 06:07:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:37.915 06:07:44 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:37.915 06:07:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:37.915 06:07:44 -- common/autotest_common.sh@10 -- # set +x 00:12:37.915 NULL1 00:12:37.915 06:07:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:37.915 06:07:44 -- target/connect_stress.sh@21 -- # PERF_PID=1079270 00:12:37.915 06:07:44 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:37.915 06:07:44 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:37.915 06:07:44 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:37.915 06:07:44 -- target/connect_stress.sh@27 -- # seq 1 20 00:12:37.915 06:07:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.915 06:07:44 -- target/connect_stress.sh@28 -- # cat 00:12:37.915 06:07:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.915 06:07:44 -- target/connect_stress.sh@28 -- # cat 00:12:37.915 06:07:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.915 06:07:44 -- target/connect_stress.sh@28 -- # cat 00:12:37.915 06:07:44 -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.915 06:07:44 -- target/connect_stress.sh@28 -- # cat 00:12:37.915 06:07:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.915 06:07:44 -- target/connect_stress.sh@28 -- # cat 00:12:37.915 06:07:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.915 06:07:44 -- target/connect_stress.sh@28 -- # cat 00:12:37.915 06:07:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.915 06:07:44 -- target/connect_stress.sh@28 -- # cat 00:12:37.915 06:07:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.915 06:07:44 -- target/connect_stress.sh@28 -- # cat 00:12:37.915 06:07:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.915 06:07:44 -- target/connect_stress.sh@28 -- # cat 00:12:37.915 06:07:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.915 06:07:44 -- target/connect_stress.sh@28 -- # cat 00:12:37.915 06:07:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.915 06:07:44 -- target/connect_stress.sh@28 -- # cat 00:12:37.915 EAL: No free 2048 kB hugepages reported on node 1 00:12:37.915 06:07:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.915 06:07:44 -- target/connect_stress.sh@28 -- # cat 00:12:37.915 06:07:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.915 06:07:44 -- target/connect_stress.sh@28 -- # cat 00:12:37.915 06:07:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.915 06:07:44 -- target/connect_stress.sh@28 -- # cat 00:12:37.915 06:07:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.915 06:07:44 -- target/connect_stress.sh@28 -- # cat 00:12:37.915 06:07:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.915 06:07:44 -- target/connect_stress.sh@28 -- # cat 00:12:37.915 06:07:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.915 06:07:44 -- target/connect_stress.sh@28 -- # cat 00:12:37.915 06:07:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.915 06:07:44 -- target/connect_stress.sh@28 -- # cat 00:12:37.915 06:07:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.915 06:07:44 -- target/connect_stress.sh@28 -- # cat 00:12:37.915 06:07:44 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:37.915 06:07:44 -- target/connect_stress.sh@28 -- # cat 00:12:37.915 06:07:44 -- target/connect_stress.sh@34 -- # kill -0 1079270 00:12:37.915 06:07:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:37.915 06:07:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:37.915 06:07:44 -- common/autotest_common.sh@10 -- # set +x 00:12:38.177 06:07:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:38.177 06:07:44 -- target/connect_stress.sh@34 -- # kill -0 1079270 00:12:38.177 06:07:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:38.177 06:07:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:38.177 06:07:44 -- common/autotest_common.sh@10 -- # set +x 00:12:38.449 06:07:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:38.449 06:07:44 -- target/connect_stress.sh@34 -- # kill -0 1079270 00:12:38.449 06:07:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:38.449 06:07:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:38.449 06:07:44 -- common/autotest_common.sh@10 -- # set +x 00:12:38.706 06:07:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:38.706 06:07:45 -- target/connect_stress.sh@34 -- # 
kill -0 1079270 00:12:38.706 06:07:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:38.706 06:07:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:38.706 06:07:45 -- common/autotest_common.sh@10 -- # set +x 00:12:39.270 06:07:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:39.270 06:07:45 -- target/connect_stress.sh@34 -- # kill -0 1079270 00:12:39.271 06:07:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:39.271 06:07:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:39.271 06:07:45 -- common/autotest_common.sh@10 -- # set +x 00:12:39.528 06:07:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:39.528 06:07:45 -- target/connect_stress.sh@34 -- # kill -0 1079270 00:12:39.528 06:07:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:39.528 06:07:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:39.528 06:07:45 -- common/autotest_common.sh@10 -- # set +x 00:12:39.784 06:07:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:39.784 06:07:46 -- target/connect_stress.sh@34 -- # kill -0 1079270 00:12:39.784 06:07:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:39.784 06:07:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:39.784 06:07:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.041 06:07:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.041 06:07:46 -- target/connect_stress.sh@34 -- # kill -0 1079270 00:12:40.041 06:07:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:40.041 06:07:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.041 06:07:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.299 06:07:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.299 06:07:46 -- target/connect_stress.sh@34 -- # kill -0 1079270 00:12:40.299 06:07:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:40.299 06:07:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.299 06:07:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.862 06:07:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.863 06:07:47 -- target/connect_stress.sh@34 -- # kill -0 1079270 00:12:40.863 06:07:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:40.863 06:07:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.863 06:07:47 -- common/autotest_common.sh@10 -- # set +x 00:12:41.120 06:07:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:41.120 06:07:47 -- target/connect_stress.sh@34 -- # kill -0 1079270 00:12:41.120 06:07:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:41.120 06:07:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:41.120 06:07:47 -- common/autotest_common.sh@10 -- # set +x 00:12:41.377 06:07:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:41.377 06:07:47 -- target/connect_stress.sh@34 -- # kill -0 1079270 00:12:41.377 06:07:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:41.377 06:07:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:41.377 06:07:47 -- common/autotest_common.sh@10 -- # set +x 00:12:41.635 06:07:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:41.635 06:07:48 -- target/connect_stress.sh@34 -- # kill -0 1079270 00:12:41.635 06:07:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:41.635 06:07:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:41.635 06:07:48 -- common/autotest_common.sh@10 -- # set +x 00:12:41.892 06:07:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:41.892 06:07:48 -- target/connect_stress.sh@34 -- # kill -0 
1079270 00:12:41.892 06:07:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:41.892 06:07:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:41.892 06:07:48 -- common/autotest_common.sh@10 -- # set +x 00:12:42.458 06:07:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.458 06:07:48 -- target/connect_stress.sh@34 -- # kill -0 1079270 00:12:42.458 06:07:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:42.458 06:07:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.458 06:07:48 -- common/autotest_common.sh@10 -- # set +x 00:12:42.715 06:07:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.715 06:07:49 -- target/connect_stress.sh@34 -- # kill -0 1079270 00:12:42.715 06:07:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:42.715 06:07:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.715 06:07:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.973 06:07:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.973 06:07:49 -- target/connect_stress.sh@34 -- # kill -0 1079270 00:12:42.973 06:07:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:42.973 06:07:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.973 06:07:49 -- common/autotest_common.sh@10 -- # set +x 00:12:43.230 06:07:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.230 06:07:49 -- target/connect_stress.sh@34 -- # kill -0 1079270 00:12:43.230 06:07:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:43.230 06:07:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.230 06:07:49 -- common/autotest_common.sh@10 -- # set +x 00:12:43.487 06:07:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.487 06:07:49 -- target/connect_stress.sh@34 -- # kill -0 1079270 00:12:43.487 06:07:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:43.487 06:07:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.487 06:07:49 -- common/autotest_common.sh@10 -- # set +x 00:12:44.051 06:07:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:44.051 06:07:50 -- target/connect_stress.sh@34 -- # kill -0 1079270 00:12:44.051 06:07:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:44.051 06:07:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:44.051 06:07:50 -- common/autotest_common.sh@10 -- # set +x 00:12:44.308 06:07:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:44.308 06:07:50 -- target/connect_stress.sh@34 -- # kill -0 1079270 00:12:44.308 06:07:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:44.308 06:07:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:44.308 06:07:50 -- common/autotest_common.sh@10 -- # set +x 00:12:44.566 06:07:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:44.566 06:07:50 -- target/connect_stress.sh@34 -- # kill -0 1079270 00:12:44.566 06:07:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:44.566 06:07:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:44.566 06:07:50 -- common/autotest_common.sh@10 -- # set +x 00:12:44.822 06:07:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:44.822 06:07:51 -- target/connect_stress.sh@34 -- # kill -0 1079270 00:12:44.822 06:07:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:44.822 06:07:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:44.822 06:07:51 -- common/autotest_common.sh@10 -- # set +x 00:12:45.387 06:07:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.387 06:07:51 -- target/connect_stress.sh@34 -- # kill -0 1079270 
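Hedged aside (editorial, not log output): the repeated "kill -0 1079270" / "rpc_cmd" pairs above are the connect_stress watchdog, which keeps polling the background stress process and issuing RPCs against the target for as long as that process is alive. A minimal stand-alone sketch of that pattern, assuming STRESS_PID holds the stressor's PID and rpc_cmd is a wrapper equivalent to the harness's (both names taken from the log), could look like this:

    STRESS_PID=1079270                       # PID reported in the log; illustrative value only
    while kill -0 "$STRESS_PID" 2>/dev/null; do
        rpc_cmd                              # keep exercising the target's RPC interface while the stressor runs
    done
    wait "$STRESS_PID" 2>/dev/null || true   # reap it once kill -0 starts reporting "No such process"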
00:12:45.387 06:07:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:45.387 06:07:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.387 06:07:51 -- common/autotest_common.sh@10 -- # set +x 00:12:45.644 06:07:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.644 06:07:51 -- target/connect_stress.sh@34 -- # kill -0 1079270 00:12:45.644 06:07:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:45.644 06:07:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.644 06:07:51 -- common/autotest_common.sh@10 -- # set +x 00:12:45.902 06:07:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.902 06:07:52 -- target/connect_stress.sh@34 -- # kill -0 1079270 00:12:45.902 06:07:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:45.902 06:07:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.902 06:07:52 -- common/autotest_common.sh@10 -- # set +x 00:12:46.160 06:07:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:46.160 06:07:52 -- target/connect_stress.sh@34 -- # kill -0 1079270 00:12:46.160 06:07:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:46.160 06:07:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:46.160 06:07:52 -- common/autotest_common.sh@10 -- # set +x 00:12:46.417 06:07:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:46.417 06:07:52 -- target/connect_stress.sh@34 -- # kill -0 1079270 00:12:46.417 06:07:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:46.417 06:07:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:46.417 06:07:52 -- common/autotest_common.sh@10 -- # set +x 00:12:46.981 06:07:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:46.981 06:07:53 -- target/connect_stress.sh@34 -- # kill -0 1079270 00:12:46.981 06:07:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:46.981 06:07:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:46.981 06:07:53 -- common/autotest_common.sh@10 -- # set +x 00:12:47.238 06:07:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.238 06:07:53 -- target/connect_stress.sh@34 -- # kill -0 1079270 00:12:47.238 06:07:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:47.238 06:07:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.238 06:07:53 -- common/autotest_common.sh@10 -- # set +x 00:12:47.496 06:07:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.496 06:07:53 -- target/connect_stress.sh@34 -- # kill -0 1079270 00:12:47.496 06:07:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:47.496 06:07:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.496 06:07:53 -- common/autotest_common.sh@10 -- # set +x 00:12:47.754 06:07:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.754 06:07:54 -- target/connect_stress.sh@34 -- # kill -0 1079270 00:12:47.754 06:07:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:47.754 06:07:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.754 06:07:54 -- common/autotest_common.sh@10 -- # set +x 00:12:48.012 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:48.012 06:07:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:48.012 06:07:54 -- target/connect_stress.sh@34 -- # kill -0 1079270 00:12:48.012 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1079270) - No such process 00:12:48.012 06:07:54 -- target/connect_stress.sh@38 -- # wait 1079270 00:12:48.012 06:07:54 -- target/connect_stress.sh@39 -- # rm 
-f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:48.012 06:07:54 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:48.012 06:07:54 -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:48.012 06:07:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:48.012 06:07:54 -- nvmf/common.sh@116 -- # sync 00:12:48.012 06:07:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:48.012 06:07:54 -- nvmf/common.sh@119 -- # set +e 00:12:48.012 06:07:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:48.012 06:07:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:48.012 rmmod nvme_tcp 00:12:48.012 rmmod nvme_fabrics 00:12:48.270 rmmod nvme_keyring 00:12:48.270 06:07:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:48.270 06:07:54 -- nvmf/common.sh@123 -- # set -e 00:12:48.270 06:07:54 -- nvmf/common.sh@124 -- # return 0 00:12:48.270 06:07:54 -- nvmf/common.sh@477 -- # '[' -n 1079111 ']' 00:12:48.270 06:07:54 -- nvmf/common.sh@478 -- # killprocess 1079111 00:12:48.270 06:07:54 -- common/autotest_common.sh@926 -- # '[' -z 1079111 ']' 00:12:48.270 06:07:54 -- common/autotest_common.sh@930 -- # kill -0 1079111 00:12:48.270 06:07:54 -- common/autotest_common.sh@931 -- # uname 00:12:48.270 06:07:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:48.270 06:07:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1079111 00:12:48.270 06:07:54 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:48.270 06:07:54 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:48.270 06:07:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1079111' 00:12:48.270 killing process with pid 1079111 00:12:48.270 06:07:54 -- common/autotest_common.sh@945 -- # kill 1079111 00:12:48.270 06:07:54 -- common/autotest_common.sh@950 -- # wait 1079111 00:12:48.529 06:07:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:48.529 06:07:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:48.529 06:07:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:48.529 06:07:54 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:48.529 06:07:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:48.529 06:07:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.529 06:07:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:48.529 06:07:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.430 06:07:56 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:50.430 00:12:50.430 real 0m15.840s 00:12:50.430 user 0m40.610s 00:12:50.430 sys 0m5.692s 00:12:50.430 06:07:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:50.430 06:07:56 -- common/autotest_common.sh@10 -- # set +x 00:12:50.430 ************************************ 00:12:50.430 END TEST nvmf_connect_stress 00:12:50.430 ************************************ 00:12:50.430 06:07:56 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:50.430 06:07:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:50.430 06:07:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:50.430 06:07:56 -- common/autotest_common.sh@10 -- # set +x 00:12:50.430 ************************************ 00:12:50.430 START TEST nvmf_fused_ordering 00:12:50.430 ************************************ 00:12:50.430 06:07:56 -- common/autotest_common.sh@1104 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:50.689 * Looking for test storage... 00:12:50.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:50.689 06:07:56 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:50.689 06:07:56 -- nvmf/common.sh@7 -- # uname -s 00:12:50.689 06:07:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:50.689 06:07:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:50.689 06:07:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:50.689 06:07:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:50.689 06:07:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:50.689 06:07:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:50.689 06:07:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:50.689 06:07:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:50.689 06:07:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:50.689 06:07:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:50.689 06:07:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:50.689 06:07:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:50.689 06:07:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:50.689 06:07:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:50.689 06:07:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:50.689 06:07:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:50.689 06:07:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:50.689 06:07:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:50.689 06:07:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:50.689 06:07:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.689 06:07:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.689 06:07:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.689 06:07:56 -- paths/export.sh@5 -- # export PATH 00:12:50.689 06:07:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:50.689 06:07:56 -- nvmf/common.sh@46 -- # : 0 00:12:50.689 06:07:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:50.689 06:07:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:50.689 06:07:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:50.689 06:07:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:50.689 06:07:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:50.689 06:07:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:50.689 06:07:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:50.689 06:07:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:50.689 06:07:56 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:50.689 06:07:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:50.689 06:07:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:50.689 06:07:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:50.689 06:07:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:50.689 06:07:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:50.689 06:07:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.689 06:07:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:50.689 06:07:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.689 06:07:56 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:50.689 06:07:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:50.689 06:07:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:50.689 06:07:56 -- common/autotest_common.sh@10 -- # set +x 00:12:52.600 06:07:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:52.600 06:07:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:52.600 06:07:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:52.600 06:07:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:52.600 06:07:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:52.600 06:07:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:52.600 06:07:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:52.600 06:07:58 -- nvmf/common.sh@294 -- # net_devs=() 00:12:52.600 06:07:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:52.600 06:07:58 -- nvmf/common.sh@295 -- # e810=() 00:12:52.600 06:07:58 -- nvmf/common.sh@295 -- # local -ga e810 00:12:52.600 06:07:58 -- nvmf/common.sh@296 -- # x722=() 
00:12:52.600 06:07:58 -- nvmf/common.sh@296 -- # local -ga x722 00:12:52.600 06:07:58 -- nvmf/common.sh@297 -- # mlx=() 00:12:52.600 06:07:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:52.600 06:07:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:52.600 06:07:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:52.600 06:07:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:52.600 06:07:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:52.600 06:07:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:52.600 06:07:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:52.600 06:07:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:52.600 06:07:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:52.600 06:07:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:52.600 06:07:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:52.600 06:07:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:52.600 06:07:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:52.600 06:07:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:52.600 06:07:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:52.600 06:07:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:52.600 06:07:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:52.600 06:07:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:52.600 06:07:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:52.600 06:07:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:52.600 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:52.600 06:07:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:52.600 06:07:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:52.600 06:07:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.600 06:07:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.600 06:07:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:52.600 06:07:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:52.600 06:07:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:52.600 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:52.600 06:07:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:52.600 06:07:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:52.600 06:07:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.600 06:07:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.600 06:07:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:52.600 06:07:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:52.600 06:07:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:52.600 06:07:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:52.600 06:07:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:52.600 06:07:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.600 06:07:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:52.600 06:07:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.600 06:07:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:52.600 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:52.600 06:07:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
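Hedged aside (editorial): the probe above matches the E810 NICs purely by PCI vendor:device ID (0x8086:0x159b) and then reads the bound netdev name out of sysfs. The same lookup can be reproduced by hand; the two port addresses below are the ones reported in the log, and the sysfs glob mirrors what common.sh does:

    lspci -nn -d 8086:159b                    # list E810-family (0x159b) functions with numeric IDs
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        ls "/sys/bus/pci/devices/$pci/net/"   # kernel netdev bound to each function (cvl_0_0 for the first port above)
    done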
00:12:52.600 06:07:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:52.600 06:07:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.600 06:07:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:52.600 06:07:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.600 06:07:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:52.600 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:52.601 06:07:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.601 06:07:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:52.601 06:07:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:52.601 06:07:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:52.601 06:07:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:52.601 06:07:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:52.601 06:07:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:52.601 06:07:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:52.601 06:07:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:52.601 06:07:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:52.601 06:07:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:52.601 06:07:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:52.601 06:07:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:52.601 06:07:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:52.601 06:07:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:52.601 06:07:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:52.601 06:07:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:52.601 06:07:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:52.601 06:07:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:52.601 06:07:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:52.601 06:07:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:52.601 06:07:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:52.601 06:07:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:52.601 06:07:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:52.601 06:07:59 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:52.601 06:07:59 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:52.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:52.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:12:52.601 00:12:52.601 --- 10.0.0.2 ping statistics --- 00:12:52.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.601 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:12:52.601 06:07:59 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:52.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:52.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:12:52.601 00:12:52.601 --- 10.0.0.1 ping statistics --- 00:12:52.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.601 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:12:52.601 06:07:59 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:52.601 06:07:59 -- nvmf/common.sh@410 -- # return 0 00:12:52.601 06:07:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:52.601 06:07:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:52.601 06:07:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:52.601 06:07:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:52.601 06:07:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:52.601 06:07:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:52.601 06:07:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:52.601 06:07:59 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:52.601 06:07:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:52.601 06:07:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:52.601 06:07:59 -- common/autotest_common.sh@10 -- # set +x 00:12:52.601 06:07:59 -- nvmf/common.sh@469 -- # nvmfpid=1082465 00:12:52.601 06:07:59 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:52.601 06:07:59 -- nvmf/common.sh@470 -- # waitforlisten 1082465 00:12:52.601 06:07:59 -- common/autotest_common.sh@819 -- # '[' -z 1082465 ']' 00:12:52.601 06:07:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.601 06:07:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:52.601 06:07:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.601 06:07:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:52.601 06:07:59 -- common/autotest_common.sh@10 -- # set +x 00:12:52.858 [2024-07-13 06:07:59.113675] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:52.858 [2024-07-13 06:07:59.113742] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.858 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.858 [2024-07-13 06:07:59.177325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.858 [2024-07-13 06:07:59.284104] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:52.858 [2024-07-13 06:07:59.284271] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.858 [2024-07-13 06:07:59.284290] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.858 [2024-07-13 06:07:59.284303] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
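Pulling the commands logged above into one place: nvmf_tcp_init moves one E810 port into a private namespace, addresses both ends of the 10.0.0.0/24 link, opens TCP/4420, checks reachability, and only then starts nvmf_tgt inside the namespace. A condensed sketch follows; the relative paths and the readiness poll are illustrative (the harness uses its waitforlisten helper), while the ip/iptables/nvmf_tgt invocations are the ones recorded above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator side -> target address
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    # stand-in for waitforlisten: poll the default RPC socket until the app answers
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done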
00:12:52.858 [2024-07-13 06:07:59.284341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.791 06:08:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:53.791 06:08:00 -- common/autotest_common.sh@852 -- # return 0 00:12:53.791 06:08:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:53.791 06:08:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:53.791 06:08:00 -- common/autotest_common.sh@10 -- # set +x 00:12:53.791 06:08:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:53.791 06:08:00 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:53.791 06:08:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.791 06:08:00 -- common/autotest_common.sh@10 -- # set +x 00:12:53.791 [2024-07-13 06:08:00.079604] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:53.791 06:08:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.791 06:08:00 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:53.791 06:08:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.791 06:08:00 -- common/autotest_common.sh@10 -- # set +x 00:12:53.791 06:08:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.791 06:08:00 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:53.791 06:08:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.791 06:08:00 -- common/autotest_common.sh@10 -- # set +x 00:12:53.791 [2024-07-13 06:08:00.095760] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:53.791 06:08:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.791 06:08:00 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:53.791 06:08:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.791 06:08:00 -- common/autotest_common.sh@10 -- # set +x 00:12:53.791 NULL1 00:12:53.791 06:08:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.791 06:08:00 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:53.791 06:08:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.791 06:08:00 -- common/autotest_common.sh@10 -- # set +x 00:12:53.791 06:08:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.791 06:08:00 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:53.791 06:08:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.791 06:08:00 -- common/autotest_common.sh@10 -- # set +x 00:12:53.791 06:08:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.791 06:08:00 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:53.791 [2024-07-13 06:08:00.140370] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
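The RPC sequence above is the entire target-side setup for this test: a TCP transport, one subsystem with a listener on 10.0.0.2:4420, and a null bdev exposed as namespace 1. Reproduced as plain scripts/rpc.py calls (the harness goes through its rpc_cmd wrapper, but the method names and arguments are exactly those logged):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512      # ~1 GB namespace, 512-byte blocks (matches "size: 1GB" below)
    $rpc bdev_wait_for_examine
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    # the exerciser is then pointed at that listener:
    ./test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'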
00:12:53.791 [2024-07-13 06:08:00.140413] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1082619 ] 00:12:53.791 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.356 Attached to nqn.2016-06.io.spdk:cnode1 00:12:54.356 Namespace ID: 1 size: 1GB 00:12:54.356 fused_ordering(0) 00:12:54.356 fused_ordering(1) 00:12:54.356 fused_ordering(2) 00:12:54.356 fused_ordering(3) 00:12:54.356 fused_ordering(4) 00:12:54.356 fused_ordering(5) 00:12:54.356 fused_ordering(6) 00:12:54.356 fused_ordering(7) 00:12:54.356 fused_ordering(8) 00:12:54.356 fused_ordering(9) 00:12:54.356 fused_ordering(10) 00:12:54.356 fused_ordering(11) 00:12:54.356 fused_ordering(12) 00:12:54.356 fused_ordering(13) 00:12:54.356 fused_ordering(14) 00:12:54.356 fused_ordering(15) 00:12:54.356 fused_ordering(16) 00:12:54.356 fused_ordering(17) 00:12:54.356 fused_ordering(18) 00:12:54.356 fused_ordering(19) 00:12:54.356 fused_ordering(20) 00:12:54.356 fused_ordering(21) 00:12:54.356 fused_ordering(22) 00:12:54.356 fused_ordering(23) 00:12:54.356 fused_ordering(24) 00:12:54.356 fused_ordering(25) 00:12:54.356 fused_ordering(26) 00:12:54.356 fused_ordering(27) 00:12:54.356 fused_ordering(28) 00:12:54.356 fused_ordering(29) 00:12:54.356 fused_ordering(30) 00:12:54.356 fused_ordering(31) 00:12:54.356 fused_ordering(32) 00:12:54.356 fused_ordering(33) 00:12:54.356 fused_ordering(34) 00:12:54.356 fused_ordering(35) 00:12:54.356 fused_ordering(36) 00:12:54.356 fused_ordering(37) 00:12:54.356 fused_ordering(38) 00:12:54.356 fused_ordering(39) 00:12:54.356 fused_ordering(40) 00:12:54.356 fused_ordering(41) 00:12:54.356 fused_ordering(42) 00:12:54.356 fused_ordering(43) 00:12:54.356 fused_ordering(44) 00:12:54.356 fused_ordering(45) 00:12:54.356 fused_ordering(46) 00:12:54.356 fused_ordering(47) 00:12:54.356 fused_ordering(48) 00:12:54.356 fused_ordering(49) 00:12:54.356 fused_ordering(50) 00:12:54.356 fused_ordering(51) 00:12:54.356 fused_ordering(52) 00:12:54.356 fused_ordering(53) 00:12:54.356 fused_ordering(54) 00:12:54.356 fused_ordering(55) 00:12:54.356 fused_ordering(56) 00:12:54.356 fused_ordering(57) 00:12:54.356 fused_ordering(58) 00:12:54.356 fused_ordering(59) 00:12:54.356 fused_ordering(60) 00:12:54.356 fused_ordering(61) 00:12:54.356 fused_ordering(62) 00:12:54.356 fused_ordering(63) 00:12:54.356 fused_ordering(64) 00:12:54.356 fused_ordering(65) 00:12:54.356 fused_ordering(66) 00:12:54.356 fused_ordering(67) 00:12:54.356 fused_ordering(68) 00:12:54.356 fused_ordering(69) 00:12:54.356 fused_ordering(70) 00:12:54.356 fused_ordering(71) 00:12:54.356 fused_ordering(72) 00:12:54.356 fused_ordering(73) 00:12:54.356 fused_ordering(74) 00:12:54.356 fused_ordering(75) 00:12:54.356 fused_ordering(76) 00:12:54.356 fused_ordering(77) 00:12:54.356 fused_ordering(78) 00:12:54.356 fused_ordering(79) 00:12:54.356 fused_ordering(80) 00:12:54.356 fused_ordering(81) 00:12:54.356 fused_ordering(82) 00:12:54.356 fused_ordering(83) 00:12:54.356 fused_ordering(84) 00:12:54.356 fused_ordering(85) 00:12:54.356 fused_ordering(86) 00:12:54.356 fused_ordering(87) 00:12:54.356 fused_ordering(88) 00:12:54.356 fused_ordering(89) 00:12:54.356 fused_ordering(90) 00:12:54.356 fused_ordering(91) 00:12:54.356 fused_ordering(92) 00:12:54.356 fused_ordering(93) 00:12:54.356 fused_ordering(94) 00:12:54.356 fused_ordering(95) 00:12:54.356 fused_ordering(96) 00:12:54.356 
fused_ordering(97) 00:12:54.356 fused_ordering(98) 00:12:54.356 fused_ordering(99) 00:12:54.356 fused_ordering(100) 00:12:54.356 fused_ordering(101) 00:12:54.356 fused_ordering(102) 00:12:54.356 fused_ordering(103) 00:12:54.356 fused_ordering(104) 00:12:54.356 fused_ordering(105) 00:12:54.356 fused_ordering(106) 00:12:54.356 fused_ordering(107) 00:12:54.356 fused_ordering(108) 00:12:54.356 fused_ordering(109) 00:12:54.356 fused_ordering(110) 00:12:54.356 fused_ordering(111) 00:12:54.356 fused_ordering(112) 00:12:54.356 fused_ordering(113) 00:12:54.356 fused_ordering(114) 00:12:54.356 fused_ordering(115) 00:12:54.356 fused_ordering(116) 00:12:54.356 fused_ordering(117) 00:12:54.356 fused_ordering(118) 00:12:54.356 fused_ordering(119) 00:12:54.356 fused_ordering(120) 00:12:54.356 fused_ordering(121) 00:12:54.356 fused_ordering(122) 00:12:54.356 fused_ordering(123) 00:12:54.356 fused_ordering(124) 00:12:54.356 fused_ordering(125) 00:12:54.356 fused_ordering(126) 00:12:54.356 fused_ordering(127) 00:12:54.356 fused_ordering(128) 00:12:54.356 fused_ordering(129) 00:12:54.356 fused_ordering(130) 00:12:54.356 fused_ordering(131) 00:12:54.356 fused_ordering(132) 00:12:54.356 fused_ordering(133) 00:12:54.356 fused_ordering(134) 00:12:54.356 fused_ordering(135) 00:12:54.356 fused_ordering(136) 00:12:54.356 fused_ordering(137) 00:12:54.356 fused_ordering(138) 00:12:54.356 fused_ordering(139) 00:12:54.356 fused_ordering(140) 00:12:54.356 fused_ordering(141) 00:12:54.356 fused_ordering(142) 00:12:54.356 fused_ordering(143) 00:12:54.356 fused_ordering(144) 00:12:54.356 fused_ordering(145) 00:12:54.356 fused_ordering(146) 00:12:54.356 fused_ordering(147) 00:12:54.356 fused_ordering(148) 00:12:54.356 fused_ordering(149) 00:12:54.356 fused_ordering(150) 00:12:54.356 fused_ordering(151) 00:12:54.356 fused_ordering(152) 00:12:54.356 fused_ordering(153) 00:12:54.356 fused_ordering(154) 00:12:54.356 fused_ordering(155) 00:12:54.356 fused_ordering(156) 00:12:54.356 fused_ordering(157) 00:12:54.356 fused_ordering(158) 00:12:54.356 fused_ordering(159) 00:12:54.356 fused_ordering(160) 00:12:54.356 fused_ordering(161) 00:12:54.356 fused_ordering(162) 00:12:54.356 fused_ordering(163) 00:12:54.356 fused_ordering(164) 00:12:54.356 fused_ordering(165) 00:12:54.356 fused_ordering(166) 00:12:54.356 fused_ordering(167) 00:12:54.356 fused_ordering(168) 00:12:54.356 fused_ordering(169) 00:12:54.356 fused_ordering(170) 00:12:54.356 fused_ordering(171) 00:12:54.356 fused_ordering(172) 00:12:54.356 fused_ordering(173) 00:12:54.356 fused_ordering(174) 00:12:54.356 fused_ordering(175) 00:12:54.356 fused_ordering(176) 00:12:54.356 fused_ordering(177) 00:12:54.356 fused_ordering(178) 00:12:54.356 fused_ordering(179) 00:12:54.356 fused_ordering(180) 00:12:54.356 fused_ordering(181) 00:12:54.356 fused_ordering(182) 00:12:54.356 fused_ordering(183) 00:12:54.356 fused_ordering(184) 00:12:54.356 fused_ordering(185) 00:12:54.356 fused_ordering(186) 00:12:54.356 fused_ordering(187) 00:12:54.356 fused_ordering(188) 00:12:54.356 fused_ordering(189) 00:12:54.356 fused_ordering(190) 00:12:54.356 fused_ordering(191) 00:12:54.356 fused_ordering(192) 00:12:54.356 fused_ordering(193) 00:12:54.356 fused_ordering(194) 00:12:54.356 fused_ordering(195) 00:12:54.356 fused_ordering(196) 00:12:54.356 fused_ordering(197) 00:12:54.356 fused_ordering(198) 00:12:54.356 fused_ordering(199) 00:12:54.356 fused_ordering(200) 00:12:54.356 fused_ordering(201) 00:12:54.356 fused_ordering(202) 00:12:54.356 fused_ordering(203) 00:12:54.356 fused_ordering(204) 
00:12:54.356 fused_ordering(205) 00:12:54.613 fused_ordering(206) 00:12:54.613 fused_ordering(207) 00:12:54.613 fused_ordering(208) 00:12:54.613 fused_ordering(209) 00:12:54.613 fused_ordering(210) 00:12:54.613 fused_ordering(211) 00:12:54.613 fused_ordering(212) 00:12:54.613 fused_ordering(213) 00:12:54.613 fused_ordering(214) 00:12:54.613 fused_ordering(215) 00:12:54.613 fused_ordering(216) 00:12:54.613 fused_ordering(217) 00:12:54.613 fused_ordering(218) 00:12:54.613 fused_ordering(219) 00:12:54.613 fused_ordering(220) 00:12:54.613 fused_ordering(221) 00:12:54.613 fused_ordering(222) 00:12:54.613 fused_ordering(223) 00:12:54.613 fused_ordering(224) 00:12:54.613 fused_ordering(225) 00:12:54.613 fused_ordering(226) 00:12:54.613 fused_ordering(227) 00:12:54.613 fused_ordering(228) 00:12:54.613 fused_ordering(229) 00:12:54.613 fused_ordering(230) 00:12:54.613 fused_ordering(231) 00:12:54.613 fused_ordering(232) 00:12:54.613 fused_ordering(233) 00:12:54.613 fused_ordering(234) 00:12:54.613 fused_ordering(235) 00:12:54.613 fused_ordering(236) 00:12:54.613 fused_ordering(237) 00:12:54.613 fused_ordering(238) 00:12:54.613 fused_ordering(239) 00:12:54.613 fused_ordering(240) 00:12:54.613 fused_ordering(241) 00:12:54.613 fused_ordering(242) 00:12:54.613 fused_ordering(243) 00:12:54.613 fused_ordering(244) 00:12:54.613 fused_ordering(245) 00:12:54.613 fused_ordering(246) 00:12:54.613 fused_ordering(247) 00:12:54.613 fused_ordering(248) 00:12:54.613 fused_ordering(249) 00:12:54.613 fused_ordering(250) 00:12:54.613 fused_ordering(251) 00:12:54.613 fused_ordering(252) 00:12:54.613 fused_ordering(253) 00:12:54.613 fused_ordering(254) 00:12:54.613 fused_ordering(255) 00:12:54.613 fused_ordering(256) 00:12:54.613 fused_ordering(257) 00:12:54.613 fused_ordering(258) 00:12:54.613 fused_ordering(259) 00:12:54.613 fused_ordering(260) 00:12:54.613 fused_ordering(261) 00:12:54.613 fused_ordering(262) 00:12:54.613 fused_ordering(263) 00:12:54.613 fused_ordering(264) 00:12:54.613 fused_ordering(265) 00:12:54.613 fused_ordering(266) 00:12:54.613 fused_ordering(267) 00:12:54.613 fused_ordering(268) 00:12:54.613 fused_ordering(269) 00:12:54.613 fused_ordering(270) 00:12:54.613 fused_ordering(271) 00:12:54.613 fused_ordering(272) 00:12:54.613 fused_ordering(273) 00:12:54.613 fused_ordering(274) 00:12:54.613 fused_ordering(275) 00:12:54.613 fused_ordering(276) 00:12:54.613 fused_ordering(277) 00:12:54.613 fused_ordering(278) 00:12:54.613 fused_ordering(279) 00:12:54.613 fused_ordering(280) 00:12:54.613 fused_ordering(281) 00:12:54.613 fused_ordering(282) 00:12:54.613 fused_ordering(283) 00:12:54.613 fused_ordering(284) 00:12:54.613 fused_ordering(285) 00:12:54.613 fused_ordering(286) 00:12:54.613 fused_ordering(287) 00:12:54.613 fused_ordering(288) 00:12:54.613 fused_ordering(289) 00:12:54.613 fused_ordering(290) 00:12:54.613 fused_ordering(291) 00:12:54.613 fused_ordering(292) 00:12:54.613 fused_ordering(293) 00:12:54.613 fused_ordering(294) 00:12:54.613 fused_ordering(295) 00:12:54.613 fused_ordering(296) 00:12:54.613 fused_ordering(297) 00:12:54.613 fused_ordering(298) 00:12:54.613 fused_ordering(299) 00:12:54.613 fused_ordering(300) 00:12:54.613 fused_ordering(301) 00:12:54.613 fused_ordering(302) 00:12:54.613 fused_ordering(303) 00:12:54.613 fused_ordering(304) 00:12:54.613 fused_ordering(305) 00:12:54.613 fused_ordering(306) 00:12:54.613 fused_ordering(307) 00:12:54.613 fused_ordering(308) 00:12:54.613 fused_ordering(309) 00:12:54.613 fused_ordering(310) 00:12:54.613 fused_ordering(311) 00:12:54.613 
fused_ordering(312) 00:12:54.613 fused_ordering(313) 00:12:54.613 fused_ordering(314) 00:12:54.613 fused_ordering(315) 00:12:54.613 fused_ordering(316) 00:12:54.613 fused_ordering(317) 00:12:54.613 fused_ordering(318) 00:12:54.613 fused_ordering(319) 00:12:54.613 fused_ordering(320) 00:12:54.613 fused_ordering(321) 00:12:54.613 fused_ordering(322) 00:12:54.613 fused_ordering(323) 00:12:54.613 fused_ordering(324) 00:12:54.613 fused_ordering(325) 00:12:54.613 fused_ordering(326) 00:12:54.613 fused_ordering(327) 00:12:54.613 fused_ordering(328) 00:12:54.613 fused_ordering(329) 00:12:54.613 fused_ordering(330) 00:12:54.613 fused_ordering(331) 00:12:54.613 fused_ordering(332) 00:12:54.613 fused_ordering(333) 00:12:54.613 fused_ordering(334) 00:12:54.613 fused_ordering(335) 00:12:54.613 fused_ordering(336) 00:12:54.613 fused_ordering(337) 00:12:54.613 fused_ordering(338) 00:12:54.613 fused_ordering(339) 00:12:54.613 fused_ordering(340) 00:12:54.613 fused_ordering(341) 00:12:54.613 fused_ordering(342) 00:12:54.613 fused_ordering(343) 00:12:54.613 fused_ordering(344) 00:12:54.613 fused_ordering(345) 00:12:54.613 fused_ordering(346) 00:12:54.613 fused_ordering(347) 00:12:54.613 fused_ordering(348) 00:12:54.613 fused_ordering(349) 00:12:54.613 fused_ordering(350) 00:12:54.613 fused_ordering(351) 00:12:54.613 fused_ordering(352) 00:12:54.613 fused_ordering(353) 00:12:54.613 fused_ordering(354) 00:12:54.613 fused_ordering(355) 00:12:54.613 fused_ordering(356) 00:12:54.613 fused_ordering(357) 00:12:54.613 fused_ordering(358) 00:12:54.613 fused_ordering(359) 00:12:54.613 fused_ordering(360) 00:12:54.613 fused_ordering(361) 00:12:54.613 fused_ordering(362) 00:12:54.613 fused_ordering(363) 00:12:54.613 fused_ordering(364) 00:12:54.613 fused_ordering(365) 00:12:54.613 fused_ordering(366) 00:12:54.613 fused_ordering(367) 00:12:54.613 fused_ordering(368) 00:12:54.613 fused_ordering(369) 00:12:54.613 fused_ordering(370) 00:12:54.613 fused_ordering(371) 00:12:54.613 fused_ordering(372) 00:12:54.613 fused_ordering(373) 00:12:54.613 fused_ordering(374) 00:12:54.613 fused_ordering(375) 00:12:54.613 fused_ordering(376) 00:12:54.613 fused_ordering(377) 00:12:54.613 fused_ordering(378) 00:12:54.613 fused_ordering(379) 00:12:54.613 fused_ordering(380) 00:12:54.613 fused_ordering(381) 00:12:54.613 fused_ordering(382) 00:12:54.613 fused_ordering(383) 00:12:54.613 fused_ordering(384) 00:12:54.613 fused_ordering(385) 00:12:54.613 fused_ordering(386) 00:12:54.613 fused_ordering(387) 00:12:54.613 fused_ordering(388) 00:12:54.613 fused_ordering(389) 00:12:54.613 fused_ordering(390) 00:12:54.613 fused_ordering(391) 00:12:54.613 fused_ordering(392) 00:12:54.613 fused_ordering(393) 00:12:54.613 fused_ordering(394) 00:12:54.613 fused_ordering(395) 00:12:54.613 fused_ordering(396) 00:12:54.613 fused_ordering(397) 00:12:54.613 fused_ordering(398) 00:12:54.613 fused_ordering(399) 00:12:54.613 fused_ordering(400) 00:12:54.613 fused_ordering(401) 00:12:54.613 fused_ordering(402) 00:12:54.613 fused_ordering(403) 00:12:54.613 fused_ordering(404) 00:12:54.613 fused_ordering(405) 00:12:54.613 fused_ordering(406) 00:12:54.613 fused_ordering(407) 00:12:54.613 fused_ordering(408) 00:12:54.613 fused_ordering(409) 00:12:54.613 fused_ordering(410) 00:12:55.177 fused_ordering(411) 00:12:55.177 fused_ordering(412) 00:12:55.177 fused_ordering(413) 00:12:55.177 fused_ordering(414) 00:12:55.177 fused_ordering(415) 00:12:55.177 fused_ordering(416) 00:12:55.177 fused_ordering(417) 00:12:55.177 fused_ordering(418) 00:12:55.177 fused_ordering(419) 
00:12:55.177 fused_ordering(420) 00:12:55.177 fused_ordering(421) 00:12:55.177 fused_ordering(422) 00:12:55.177 fused_ordering(423) 00:12:55.177 fused_ordering(424) 00:12:55.177 fused_ordering(425) 00:12:55.177 fused_ordering(426) 00:12:55.177 fused_ordering(427) 00:12:55.177 fused_ordering(428) 00:12:55.177 fused_ordering(429) 00:12:55.177 fused_ordering(430) 00:12:55.177 fused_ordering(431) 00:12:55.177 fused_ordering(432) 00:12:55.177 fused_ordering(433) 00:12:55.177 fused_ordering(434) 00:12:55.177 fused_ordering(435) 00:12:55.177 fused_ordering(436) 00:12:55.177 fused_ordering(437) 00:12:55.177 fused_ordering(438) 00:12:55.177 fused_ordering(439) 00:12:55.177 fused_ordering(440) 00:12:55.177 fused_ordering(441) 00:12:55.177 fused_ordering(442) 00:12:55.177 fused_ordering(443) 00:12:55.177 fused_ordering(444) 00:12:55.177 fused_ordering(445) 00:12:55.177 fused_ordering(446) 00:12:55.177 fused_ordering(447) 00:12:55.177 fused_ordering(448) 00:12:55.177 fused_ordering(449) 00:12:55.177 fused_ordering(450) 00:12:55.177 fused_ordering(451) 00:12:55.177 fused_ordering(452) 00:12:55.177 fused_ordering(453) 00:12:55.177 fused_ordering(454) 00:12:55.177 fused_ordering(455) 00:12:55.177 fused_ordering(456) 00:12:55.177 fused_ordering(457) 00:12:55.177 fused_ordering(458) 00:12:55.177 fused_ordering(459) 00:12:55.177 fused_ordering(460) 00:12:55.177 fused_ordering(461) 00:12:55.177 fused_ordering(462) 00:12:55.177 fused_ordering(463) 00:12:55.177 fused_ordering(464) 00:12:55.177 fused_ordering(465) 00:12:55.177 fused_ordering(466) 00:12:55.177 fused_ordering(467) 00:12:55.177 fused_ordering(468) 00:12:55.177 fused_ordering(469) 00:12:55.177 fused_ordering(470) 00:12:55.177 fused_ordering(471) 00:12:55.177 fused_ordering(472) 00:12:55.177 fused_ordering(473) 00:12:55.177 fused_ordering(474) 00:12:55.177 fused_ordering(475) 00:12:55.177 fused_ordering(476) 00:12:55.177 fused_ordering(477) 00:12:55.177 fused_ordering(478) 00:12:55.177 fused_ordering(479) 00:12:55.177 fused_ordering(480) 00:12:55.177 fused_ordering(481) 00:12:55.177 fused_ordering(482) 00:12:55.177 fused_ordering(483) 00:12:55.177 fused_ordering(484) 00:12:55.177 fused_ordering(485) 00:12:55.177 fused_ordering(486) 00:12:55.177 fused_ordering(487) 00:12:55.177 fused_ordering(488) 00:12:55.177 fused_ordering(489) 00:12:55.177 fused_ordering(490) 00:12:55.177 fused_ordering(491) 00:12:55.177 fused_ordering(492) 00:12:55.177 fused_ordering(493) 00:12:55.177 fused_ordering(494) 00:12:55.177 fused_ordering(495) 00:12:55.177 fused_ordering(496) 00:12:55.177 fused_ordering(497) 00:12:55.177 fused_ordering(498) 00:12:55.177 fused_ordering(499) 00:12:55.177 fused_ordering(500) 00:12:55.177 fused_ordering(501) 00:12:55.177 fused_ordering(502) 00:12:55.177 fused_ordering(503) 00:12:55.177 fused_ordering(504) 00:12:55.177 fused_ordering(505) 00:12:55.177 fused_ordering(506) 00:12:55.177 fused_ordering(507) 00:12:55.177 fused_ordering(508) 00:12:55.177 fused_ordering(509) 00:12:55.177 fused_ordering(510) 00:12:55.177 fused_ordering(511) 00:12:55.177 fused_ordering(512) 00:12:55.177 fused_ordering(513) 00:12:55.177 fused_ordering(514) 00:12:55.177 fused_ordering(515) 00:12:55.177 fused_ordering(516) 00:12:55.177 fused_ordering(517) 00:12:55.177 fused_ordering(518) 00:12:55.177 fused_ordering(519) 00:12:55.177 fused_ordering(520) 00:12:55.177 fused_ordering(521) 00:12:55.177 fused_ordering(522) 00:12:55.177 fused_ordering(523) 00:12:55.177 fused_ordering(524) 00:12:55.177 fused_ordering(525) 00:12:55.177 fused_ordering(526) 00:12:55.177 
fused_ordering(527) 00:12:55.177 fused_ordering(528) 00:12:55.177 fused_ordering(529) ... 00:12:56.679 fused_ordering(955) 00:12:56.679 fused_ordering(956) 00:12:56.679
fused_ordering(957) 00:12:56.679 fused_ordering(958) 00:12:56.679 fused_ordering(959) 00:12:56.679 fused_ordering(960) 00:12:56.679 fused_ordering(961) 00:12:56.679 fused_ordering(962) 00:12:56.679 fused_ordering(963) 00:12:56.679 fused_ordering(964) 00:12:56.679 fused_ordering(965) 00:12:56.679 fused_ordering(966) 00:12:56.679 fused_ordering(967) 00:12:56.679 fused_ordering(968) 00:12:56.679 fused_ordering(969) 00:12:56.679 fused_ordering(970) 00:12:56.679 fused_ordering(971) 00:12:56.679 fused_ordering(972) 00:12:56.679 fused_ordering(973) 00:12:56.679 fused_ordering(974) 00:12:56.679 fused_ordering(975) 00:12:56.679 fused_ordering(976) 00:12:56.679 fused_ordering(977) 00:12:56.679 fused_ordering(978) 00:12:56.679 fused_ordering(979) 00:12:56.679 fused_ordering(980) 00:12:56.679 fused_ordering(981) 00:12:56.679 fused_ordering(982) 00:12:56.679 fused_ordering(983) 00:12:56.679 fused_ordering(984) 00:12:56.679 fused_ordering(985) 00:12:56.679 fused_ordering(986) 00:12:56.679 fused_ordering(987) 00:12:56.679 fused_ordering(988) 00:12:56.679 fused_ordering(989) 00:12:56.679 fused_ordering(990) 00:12:56.679 fused_ordering(991) 00:12:56.679 fused_ordering(992) 00:12:56.679 fused_ordering(993) 00:12:56.679 fused_ordering(994) 00:12:56.679 fused_ordering(995) 00:12:56.679 fused_ordering(996) 00:12:56.679 fused_ordering(997) 00:12:56.679 fused_ordering(998) 00:12:56.679 fused_ordering(999) 00:12:56.679 fused_ordering(1000) 00:12:56.679 fused_ordering(1001) 00:12:56.679 fused_ordering(1002) 00:12:56.679 fused_ordering(1003) 00:12:56.679 fused_ordering(1004) 00:12:56.679 fused_ordering(1005) 00:12:56.679 fused_ordering(1006) 00:12:56.679 fused_ordering(1007) 00:12:56.679 fused_ordering(1008) 00:12:56.679 fused_ordering(1009) 00:12:56.679 fused_ordering(1010) 00:12:56.679 fused_ordering(1011) 00:12:56.679 fused_ordering(1012) 00:12:56.679 fused_ordering(1013) 00:12:56.679 fused_ordering(1014) 00:12:56.679 fused_ordering(1015) 00:12:56.679 fused_ordering(1016) 00:12:56.679 fused_ordering(1017) 00:12:56.679 fused_ordering(1018) 00:12:56.679 fused_ordering(1019) 00:12:56.679 fused_ordering(1020) 00:12:56.679 fused_ordering(1021) 00:12:56.679 fused_ordering(1022) 00:12:56.679 fused_ordering(1023) 00:12:56.679 06:08:02 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:56.679 06:08:02 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:56.679 06:08:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:56.679 06:08:02 -- nvmf/common.sh@116 -- # sync 00:12:56.679 06:08:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:56.679 06:08:02 -- nvmf/common.sh@119 -- # set +e 00:12:56.679 06:08:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:56.679 06:08:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:56.679 rmmod nvme_tcp 00:12:56.679 rmmod nvme_fabrics 00:12:56.679 rmmod nvme_keyring 00:12:56.679 06:08:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:56.679 06:08:03 -- nvmf/common.sh@123 -- # set -e 00:12:56.679 06:08:03 -- nvmf/common.sh@124 -- # return 0 00:12:56.679 06:08:03 -- nvmf/common.sh@477 -- # '[' -n 1082465 ']' 00:12:56.679 06:08:03 -- nvmf/common.sh@478 -- # killprocess 1082465 00:12:56.679 06:08:03 -- common/autotest_common.sh@926 -- # '[' -z 1082465 ']' 00:12:56.679 06:08:03 -- common/autotest_common.sh@930 -- # kill -0 1082465 00:12:56.679 06:08:03 -- common/autotest_common.sh@931 -- # uname 00:12:56.679 06:08:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:56.679 06:08:03 -- common/autotest_common.sh@932 -- # ps --no-headers 
-o comm= 1082465 00:12:56.679 06:08:03 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:56.679 06:08:03 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:56.679 06:08:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1082465' 00:12:56.679 killing process with pid 1082465 00:12:56.679 06:08:03 -- common/autotest_common.sh@945 -- # kill 1082465 00:12:56.679 06:08:03 -- common/autotest_common.sh@950 -- # wait 1082465 00:12:56.938 06:08:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:56.938 06:08:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:56.938 06:08:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:56.938 06:08:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:56.938 06:08:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:56.938 06:08:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.938 06:08:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:56.938 06:08:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.471 06:08:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:59.471 00:12:59.471 real 0m8.442s 00:12:59.471 user 0m6.281s 00:12:59.471 sys 0m3.505s 00:12:59.471 06:08:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:59.471 06:08:05 -- common/autotest_common.sh@10 -- # set +x 00:12:59.471 ************************************ 00:12:59.471 END TEST nvmf_fused_ordering 00:12:59.471 ************************************ 00:12:59.471 06:08:05 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:59.471 06:08:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:59.471 06:08:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:59.471 06:08:05 -- common/autotest_common.sh@10 -- # set +x 00:12:59.471 ************************************ 00:12:59.471 START TEST nvmf_delete_subsystem 00:12:59.471 ************************************ 00:12:59.471 06:08:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:59.471 * Looking for test storage... 
00:12:59.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:59.471 06:08:05 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:59.471 06:08:05 -- nvmf/common.sh@7 -- # uname -s 00:12:59.471 06:08:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.471 06:08:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.471 06:08:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.471 06:08:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:59.471 06:08:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:59.471 06:08:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:59.471 06:08:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.471 06:08:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:59.471 06:08:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.471 06:08:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:59.471 06:08:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:59.471 06:08:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:59.471 06:08:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.471 06:08:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:59.471 06:08:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:59.471 06:08:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:59.471 06:08:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.471 06:08:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.471 06:08:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.471 06:08:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.471 06:08:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.471 06:08:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.471 06:08:05 -- paths/export.sh@5 -- # export PATH 00:12:59.471 06:08:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.471 06:08:05 -- nvmf/common.sh@46 -- # : 0 00:12:59.471 06:08:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:59.471 06:08:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:59.471 06:08:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:59.471 06:08:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:59.471 06:08:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:59.471 06:08:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:59.471 06:08:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:59.471 06:08:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:59.471 06:08:05 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:12:59.471 06:08:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:59.471 06:08:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:59.471 06:08:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:59.471 06:08:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:59.471 06:08:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:59.471 06:08:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.471 06:08:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:59.471 06:08:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.471 06:08:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:59.471 06:08:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:59.471 06:08:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:59.471 06:08:05 -- common/autotest_common.sh@10 -- # set +x 00:13:00.845 06:08:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:00.845 06:08:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:00.845 06:08:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:00.845 06:08:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:00.845 06:08:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:00.845 06:08:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:00.845 06:08:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:00.845 06:08:07 -- nvmf/common.sh@294 -- # net_devs=() 00:13:00.845 06:08:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:00.845 06:08:07 -- nvmf/common.sh@295 -- # e810=() 00:13:00.845 06:08:07 -- nvmf/common.sh@295 -- # local -ga e810 00:13:00.845 06:08:07 -- nvmf/common.sh@296 -- # x722=() 
00:13:00.845 06:08:07 -- nvmf/common.sh@296 -- # local -ga x722 00:13:00.845 06:08:07 -- nvmf/common.sh@297 -- # mlx=() 00:13:00.845 06:08:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:00.845 06:08:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:00.845 06:08:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:00.845 06:08:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:00.845 06:08:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:00.845 06:08:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:00.845 06:08:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:00.845 06:08:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:00.845 06:08:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:00.845 06:08:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:00.845 06:08:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:00.845 06:08:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:00.845 06:08:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:00.845 06:08:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:00.845 06:08:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:00.845 06:08:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:00.845 06:08:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:00.845 06:08:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:00.845 06:08:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:00.845 06:08:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:00.845 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:00.845 06:08:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:00.845 06:08:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:00.845 06:08:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.845 06:08:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.845 06:08:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:00.845 06:08:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:00.845 06:08:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:00.845 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:00.845 06:08:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:00.845 06:08:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:00.845 06:08:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.845 06:08:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.845 06:08:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:00.845 06:08:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:00.845 06:08:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:00.845 06:08:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:00.845 06:08:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:00.845 06:08:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.845 06:08:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:00.845 06:08:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.845 06:08:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:00.845 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:00.845 06:08:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:13:00.845 06:08:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:00.845 06:08:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.845 06:08:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:00.845 06:08:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.845 06:08:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:00.845 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:00.845 06:08:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.845 06:08:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:00.845 06:08:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:00.845 06:08:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:00.845 06:08:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:00.845 06:08:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:00.845 06:08:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:00.845 06:08:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:00.845 06:08:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:00.845 06:08:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:00.845 06:08:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:00.845 06:08:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:00.845 06:08:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:00.845 06:08:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:00.845 06:08:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:00.845 06:08:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:00.845 06:08:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:00.845 06:08:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:00.845 06:08:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:01.103 06:08:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:01.103 06:08:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:01.103 06:08:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:01.103 06:08:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:01.103 06:08:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:01.103 06:08:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:01.103 06:08:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:01.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:01.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:13:01.103 00:13:01.103 --- 10.0.0.2 ping statistics --- 00:13:01.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.103 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:13:01.103 06:08:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:01.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:01.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:13:01.103 00:13:01.103 --- 10.0.0.1 ping statistics --- 00:13:01.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.103 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:13:01.103 06:08:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:01.103 06:08:07 -- nvmf/common.sh@410 -- # return 0 00:13:01.103 06:08:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:01.103 06:08:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:01.103 06:08:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:01.103 06:08:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:01.103 06:08:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:01.103 06:08:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:01.103 06:08:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:01.103 06:08:07 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:01.103 06:08:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:01.103 06:08:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:01.103 06:08:07 -- common/autotest_common.sh@10 -- # set +x 00:13:01.103 06:08:07 -- nvmf/common.sh@469 -- # nvmfpid=1084846 00:13:01.103 06:08:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:01.103 06:08:07 -- nvmf/common.sh@470 -- # waitforlisten 1084846 00:13:01.103 06:08:07 -- common/autotest_common.sh@819 -- # '[' -z 1084846 ']' 00:13:01.103 06:08:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.103 06:08:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:01.103 06:08:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.103 06:08:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:01.103 06:08:07 -- common/autotest_common.sh@10 -- # set +x 00:13:01.103 [2024-07-13 06:08:07.522380] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:01.103 [2024-07-13 06:08:07.522457] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:01.103 EAL: No free 2048 kB hugepages reported on node 1 00:13:01.103 [2024-07-13 06:08:07.585401] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:01.361 [2024-07-13 06:08:07.691984] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:01.361 [2024-07-13 06:08:07.692136] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:01.361 [2024-07-13 06:08:07.692153] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:01.361 [2024-07-13 06:08:07.692176] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
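For reference, the nvmf_tcp_init plumbing captured above (flush the two ice ports, move one into a private network namespace, assign 10.0.0.1/10.0.0.2, open TCP port 4420, verify with ping, load nvme-tcp) condenses to the sketch below. This is a reconstruction from the log, not the harness script itself; it assumes root, that the two ports already show up as cvl_0_0/cvl_0_1, and that nothing else is using the 10.0.0.0/24 addresses.

  #!/usr/bin/env bash
  # Minimal sketch of the netns-based NVMe/TCP test bed shown in the log above.
  set -euo pipefail

  TGT_IF=cvl_0_0            # port handed to the SPDK target (moved into a netns)
  INI_IF=cvl_0_1            # port left in the default netns for the initiator
  NS=cvl_0_0_ns_spdk
  TGT_IP=10.0.0.2
  INI_IP=10.0.0.1

  # Start from a clean addressing state on both ports.
  ip -4 addr flush dev "$TGT_IF"
  ip -4 addr flush dev "$INI_IF"

  # Isolate the target port in its own namespace and address both ends.
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add "$INI_IP/24" dev "$INI_IF"
  ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"

  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up

  # Let NVMe/TCP traffic (port 4420) in through the initiator-side interface.
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

  # Connectivity check in both directions, as in the log, then load the host driver.
  ping -c 1 "$TGT_IP"
  ip netns exec "$NS" ping -c 1 "$INI_IP"
  modprobe nvme-tcp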
00:13:01.361 [2024-07-13 06:08:07.692233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.361 [2024-07-13 06:08:07.692238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.293 06:08:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:02.293 06:08:08 -- common/autotest_common.sh@852 -- # return 0 00:13:02.293 06:08:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:02.293 06:08:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:02.293 06:08:08 -- common/autotest_common.sh@10 -- # set +x 00:13:02.293 06:08:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:02.293 06:08:08 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:02.293 06:08:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:02.293 06:08:08 -- common/autotest_common.sh@10 -- # set +x 00:13:02.293 [2024-07-13 06:08:08.525551] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:02.293 06:08:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:02.293 06:08:08 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:02.293 06:08:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:02.293 06:08:08 -- common/autotest_common.sh@10 -- # set +x 00:13:02.293 06:08:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:02.293 06:08:08 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:02.293 06:08:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:02.293 06:08:08 -- common/autotest_common.sh@10 -- # set +x 00:13:02.293 [2024-07-13 06:08:08.541759] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.293 06:08:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:02.293 06:08:08 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:02.293 06:08:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:02.293 06:08:08 -- common/autotest_common.sh@10 -- # set +x 00:13:02.293 NULL1 00:13:02.293 06:08:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:02.293 06:08:08 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:02.293 06:08:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:02.293 06:08:08 -- common/autotest_common.sh@10 -- # set +x 00:13:02.293 Delay0 00:13:02.293 06:08:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:02.293 06:08:08 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:02.293 06:08:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:02.293 06:08:08 -- common/autotest_common.sh@10 -- # set +x 00:13:02.293 06:08:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:02.293 06:08:08 -- target/delete_subsystem.sh@28 -- # perf_pid=1085005 00:13:02.293 06:08:08 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:02.293 06:08:08 -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:02.293 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.293 [2024-07-13 06:08:08.616462] 
subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
... 06:08:10 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
... Write completed with error (sct=0, sc=8) ... Read completed with error (sct=0, sc=8) ... starting I/O failed: -6 ...
[2024-07-13 06:08:10.708584] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f25d8000c00 is same with the state(5) to be set ...
[2024-07-13 06:08:10.709478] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7aee10 is same with the state(5) to be set ...
[2024-07-13 06:08:11.674799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ce5a0 is same with the state(5) to be set ...
[2024-07-13 06:08:11.710664] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f25d800bf20 is same with the state(5) to be set ...
[2024-07-13 06:08:11.710859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f25d800c480 is same with the state(5) to be set ...
[2024-07-13 06:08:11.711067] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7aef90 is same with the state(5) to be set ...
[2024-07-13 06:08:11.711571] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7afc60 is same with the state(5) to be set
[2024-07-13 06:08:11.712413] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ce5a0 (9): Bad file descriptor
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
... 06:08:11 -- target/delete_subsystem.sh@34 -- # delay=0 06:08:11 -- target/delete_subsystem.sh@35 -- # kill -0 1085005 06:08:11 -- target/delete_subsystem.sh@36 -- # sleep 0.5
Initializing NVMe Controllers
Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
Controller IO queue size 128, less than required.
Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
Initialization complete. Launching workers.
00:13:05.388 ======================================================== 00:13:05.388 Latency(us) 00:13:05.388 Device Information : IOPS MiB/s Average min max 00:13:05.388 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.53 0.09 882452.86 431.60 1014693.24 00:13:05.388 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 173.06 0.08 888451.74 758.85 1014133.04 00:13:05.388 ======================================================== 00:13:05.388 Total : 349.59 0.17 885422.52 431.60 1014693.24 00:13:05.388 00:13:05.963 06:08:12 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:05.963 06:08:12 -- target/delete_subsystem.sh@35 -- # kill -0 1085005 00:13:05.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1085005) - No such process 00:13:05.963 06:08:12 -- target/delete_subsystem.sh@45 -- # NOT wait 1085005 00:13:05.963 06:08:12 -- common/autotest_common.sh@640 -- # local es=0 00:13:05.963 06:08:12 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 1085005 00:13:05.964 06:08:12 -- common/autotest_common.sh@628 -- # local arg=wait 00:13:05.964 06:08:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:05.964 06:08:12 -- common/autotest_common.sh@632 -- # type -t wait 00:13:05.964 06:08:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:05.964 06:08:12 -- common/autotest_common.sh@643 -- # wait 1085005 00:13:05.964 06:08:12 -- common/autotest_common.sh@643 -- # es=1 00:13:05.964 06:08:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:05.964 06:08:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:05.964 06:08:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:05.964 06:08:12 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:05.964 06:08:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.964 06:08:12 -- common/autotest_common.sh@10 -- # set +x 00:13:05.964 06:08:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.964 06:08:12 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:05.964 06:08:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.964 06:08:12 -- common/autotest_common.sh@10 -- # set +x 00:13:05.964 [2024-07-13 06:08:12.231002] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:05.964 06:08:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.964 06:08:12 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:05.964 06:08:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:05.964 06:08:12 -- common/autotest_common.sh@10 -- # set +x 00:13:05.964 06:08:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:05.964 06:08:12 -- target/delete_subsystem.sh@54 -- # perf_pid=1085544 00:13:05.964 06:08:12 -- target/delete_subsystem.sh@56 -- # delay=0 00:13:05.964 06:08:12 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:05.964 06:08:12 -- target/delete_subsystem.sh@57 -- # kill -0 1085544 00:13:05.964 06:08:12 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:05.964 EAL: No free 2048 kB hugepages 
reported on node 1 00:13:05.964 [2024-07-13 06:08:12.288217] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:06.531 06:08:12 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:06.531 06:08:12 -- target/delete_subsystem.sh@57 -- # kill -0 1085544 00:13:06.531 06:08:12 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:06.789 06:08:13 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:06.789 06:08:13 -- target/delete_subsystem.sh@57 -- # kill -0 1085544 00:13:06.789 06:08:13 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:07.353 06:08:13 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:07.353 06:08:13 -- target/delete_subsystem.sh@57 -- # kill -0 1085544 00:13:07.353 06:08:13 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:07.942 06:08:14 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:07.943 06:08:14 -- target/delete_subsystem.sh@57 -- # kill -0 1085544 00:13:07.943 06:08:14 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:08.506 06:08:14 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:08.506 06:08:14 -- target/delete_subsystem.sh@57 -- # kill -0 1085544 00:13:08.506 06:08:14 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:08.762 06:08:15 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:08.762 06:08:15 -- target/delete_subsystem.sh@57 -- # kill -0 1085544 00:13:08.762 06:08:15 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:09.019 Initializing NVMe Controllers 00:13:09.019 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:09.019 Controller IO queue size 128, less than required. 00:13:09.019 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:09.019 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:09.019 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:09.019 Initialization complete. Launching workers. 
00:13:09.019 ======================================================== 00:13:09.019 Latency(us) 00:13:09.019 Device Information : IOPS MiB/s Average min max 00:13:09.020 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004369.03 1000196.07 1042891.62 00:13:09.020 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004712.76 1000227.10 1013212.45 00:13:09.020 ======================================================== 00:13:09.020 Total : 256.00 0.12 1004540.89 1000196.07 1042891.62 00:13:09.020 00:13:09.276 06:08:15 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:09.276 06:08:15 -- target/delete_subsystem.sh@57 -- # kill -0 1085544 00:13:09.276 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1085544) - No such process 00:13:09.276 06:08:15 -- target/delete_subsystem.sh@67 -- # wait 1085544 00:13:09.276 06:08:15 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:09.276 06:08:15 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:13:09.276 06:08:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:09.276 06:08:15 -- nvmf/common.sh@116 -- # sync 00:13:09.276 06:08:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:09.276 06:08:15 -- nvmf/common.sh@119 -- # set +e 00:13:09.276 06:08:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:09.276 06:08:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:09.276 rmmod nvme_tcp 00:13:09.532 rmmod nvme_fabrics 00:13:09.532 rmmod nvme_keyring 00:13:09.532 06:08:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:09.532 06:08:15 -- nvmf/common.sh@123 -- # set -e 00:13:09.532 06:08:15 -- nvmf/common.sh@124 -- # return 0 00:13:09.532 06:08:15 -- nvmf/common.sh@477 -- # '[' -n 1084846 ']' 00:13:09.532 06:08:15 -- nvmf/common.sh@478 -- # killprocess 1084846 00:13:09.532 06:08:15 -- common/autotest_common.sh@926 -- # '[' -z 1084846 ']' 00:13:09.532 06:08:15 -- common/autotest_common.sh@930 -- # kill -0 1084846 00:13:09.532 06:08:15 -- common/autotest_common.sh@931 -- # uname 00:13:09.532 06:08:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:09.532 06:08:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1084846 00:13:09.532 06:08:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:09.532 06:08:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:09.532 06:08:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1084846' 00:13:09.532 killing process with pid 1084846 00:13:09.532 06:08:15 -- common/autotest_common.sh@945 -- # kill 1084846 00:13:09.532 06:08:15 -- common/autotest_common.sh@950 -- # wait 1084846 00:13:09.790 06:08:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:09.790 06:08:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:09.790 06:08:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:09.790 06:08:16 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:09.790 06:08:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:09.790 06:08:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.790 06:08:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:09.790 06:08:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.690 06:08:18 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:11.690 00:13:11.690 real 0m12.751s 00:13:11.690 user 0m29.114s 00:13:11.690 sys 0m2.856s 
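For anyone reconstructing what delete_subsystem.sh is exercising in the run above, here is a minimal standalone sketch of the same flow, pieced together from the commands visible in the trace. It assumes a running nvmf_tgt, the stock scripts/rpc.py client, and an existing bdev named Delay0; the explicit nvmf_delete_subsystem call is inferred from the script's intent rather than shown in this excerpt, and the relative paths stand in for the Jenkins workspace paths above.

# Sketch only: create the subsystem, start perf, then tear the subsystem
# down while I/O is still outstanding, as delete_subsystem.sh does above.
RPC=./scripts/rpc.py    # assumed path to the standard SPDK RPC client

$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Same spdk_nvme_perf invocation as in the trace, run in the background so
# the subsystem can be deleted while the workload is active.
./build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

# Deleting the subsystem mid-run is what produces the bursts of
# "completed with error (sct=0, sc=8)" completions seen earlier.
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

# Simplified version of the script's kill -0 / sleep 0.5 wait loop.
while kill -0 "$perf_pid" 2> /dev/null; do
    sleep 0.5
done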
00:13:11.690 06:08:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:11.690 06:08:18 -- common/autotest_common.sh@10 -- # set +x 00:13:11.690 ************************************ 00:13:11.690 END TEST nvmf_delete_subsystem 00:13:11.690 ************************************ 00:13:11.690 06:08:18 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:13:11.690 06:08:18 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:11.690 06:08:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:11.690 06:08:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:11.690 06:08:18 -- common/autotest_common.sh@10 -- # set +x 00:13:11.690 ************************************ 00:13:11.690 START TEST nvmf_nvme_cli 00:13:11.690 ************************************ 00:13:11.690 06:08:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:11.948 * Looking for test storage... 00:13:11.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:11.948 06:08:18 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:11.948 06:08:18 -- nvmf/common.sh@7 -- # uname -s 00:13:11.948 06:08:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:11.948 06:08:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:11.948 06:08:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:11.948 06:08:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:11.948 06:08:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:11.948 06:08:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:11.948 06:08:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:11.948 06:08:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:11.948 06:08:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:11.948 06:08:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:11.949 06:08:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:11.949 06:08:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:11.949 06:08:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:11.949 06:08:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:11.949 06:08:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:11.949 06:08:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:11.949 06:08:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:11.949 06:08:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:11.949 06:08:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:11.949 06:08:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.949 06:08:18 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.949 06:08:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.949 06:08:18 -- paths/export.sh@5 -- # export PATH 00:13:11.949 06:08:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.949 06:08:18 -- nvmf/common.sh@46 -- # : 0 00:13:11.949 06:08:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:11.949 06:08:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:11.949 06:08:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:11.949 06:08:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:11.949 06:08:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:11.949 06:08:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:11.949 06:08:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:11.949 06:08:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:11.949 06:08:18 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:11.949 06:08:18 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:11.949 06:08:18 -- target/nvme_cli.sh@14 -- # devs=() 00:13:11.949 06:08:18 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:11.949 06:08:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:11.949 06:08:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:11.949 06:08:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:11.949 06:08:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:11.949 06:08:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:11.949 06:08:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.949 06:08:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:11.949 06:08:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.949 06:08:18 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:11.949 06:08:18 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:11.949 06:08:18 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:11.949 06:08:18 -- common/autotest_common.sh@10 -- # set +x 00:13:13.849 06:08:20 -- 
nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:13.849 06:08:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:13.849 06:08:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:13.850 06:08:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:13.850 06:08:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:13.850 06:08:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:13.850 06:08:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:13.850 06:08:20 -- nvmf/common.sh@294 -- # net_devs=() 00:13:13.850 06:08:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:13.850 06:08:20 -- nvmf/common.sh@295 -- # e810=() 00:13:13.850 06:08:20 -- nvmf/common.sh@295 -- # local -ga e810 00:13:13.850 06:08:20 -- nvmf/common.sh@296 -- # x722=() 00:13:13.850 06:08:20 -- nvmf/common.sh@296 -- # local -ga x722 00:13:13.850 06:08:20 -- nvmf/common.sh@297 -- # mlx=() 00:13:13.850 06:08:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:13.850 06:08:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:13.850 06:08:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:13.850 06:08:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:13.850 06:08:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:13.850 06:08:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:13.850 06:08:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:13.850 06:08:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:13.850 06:08:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:13.850 06:08:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:13.850 06:08:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:13.850 06:08:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:13.850 06:08:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:13.850 06:08:20 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:13.850 06:08:20 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:13.850 06:08:20 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:13.850 06:08:20 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:13.850 06:08:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:13.850 06:08:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:13.850 06:08:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:13.850 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:13.850 06:08:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:13.850 06:08:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:13.850 06:08:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:13.850 06:08:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:13.850 06:08:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:13.850 06:08:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:13.850 06:08:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:13.850 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:13.850 06:08:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:13.850 06:08:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:13.850 06:08:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:13.850 06:08:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:13.850 06:08:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:13:13.850 06:08:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:13.850 06:08:20 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:13.850 06:08:20 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:13.850 06:08:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:13.850 06:08:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:13.850 06:08:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:13.850 06:08:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:13.850 06:08:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:13.850 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:13.850 06:08:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:13.850 06:08:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:13.850 06:08:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:13.850 06:08:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:13.850 06:08:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:13.850 06:08:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:13.850 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:13.850 06:08:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:13.850 06:08:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:13.850 06:08:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:13.850 06:08:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:13.850 06:08:20 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:13.850 06:08:20 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:13.850 06:08:20 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:13.850 06:08:20 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:13.850 06:08:20 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:13.850 06:08:20 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:13.850 06:08:20 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:13.850 06:08:20 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:13.850 06:08:20 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:13.850 06:08:20 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:13.850 06:08:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:13.850 06:08:20 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:13.850 06:08:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:13.850 06:08:20 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:13.850 06:08:20 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:13.850 06:08:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:13.850 06:08:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:13.850 06:08:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:13.850 06:08:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:13.850 06:08:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:13.850 06:08:20 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:13.850 06:08:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:13.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:13.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:13:13.850 00:13:13.850 --- 10.0.0.2 ping statistics --- 00:13:13.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.850 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:13:13.850 06:08:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:13.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:13.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:13:13.850 00:13:13.850 --- 10.0.0.1 ping statistics --- 00:13:13.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.850 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:13:13.850 06:08:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:13.850 06:08:20 -- nvmf/common.sh@410 -- # return 0 00:13:13.850 06:08:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:13.850 06:08:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:13.850 06:08:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:13.850 06:08:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:13.850 06:08:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:13.850 06:08:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:13.850 06:08:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:13.850 06:08:20 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:13.850 06:08:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:13.850 06:08:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:13.851 06:08:20 -- common/autotest_common.sh@10 -- # set +x 00:13:13.851 06:08:20 -- nvmf/common.sh@469 -- # nvmfpid=1087909 00:13:13.851 06:08:20 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:13.851 06:08:20 -- nvmf/common.sh@470 -- # waitforlisten 1087909 00:13:13.851 06:08:20 -- common/autotest_common.sh@819 -- # '[' -z 1087909 ']' 00:13:13.851 06:08:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.851 06:08:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:13.851 06:08:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.851 06:08:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:13.851 06:08:20 -- common/autotest_common.sh@10 -- # set +x 00:13:14.109 [2024-07-13 06:08:20.393420] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:14.109 [2024-07-13 06:08:20.393515] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.109 EAL: No free 2048 kB hugepages reported on node 1 00:13:14.109 [2024-07-13 06:08:20.462472] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:14.109 [2024-07-13 06:08:20.578953] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:14.109 [2024-07-13 06:08:20.579123] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:14.109 [2024-07-13 06:08:20.579144] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:14.109 [2024-07-13 06:08:20.579159] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:14.109 [2024-07-13 06:08:20.579258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.109 [2024-07-13 06:08:20.579315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.109 [2024-07-13 06:08:20.579403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:14.109 [2024-07-13 06:08:20.579406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.041 06:08:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:15.041 06:08:21 -- common/autotest_common.sh@852 -- # return 0 00:13:15.041 06:08:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:15.041 06:08:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:15.041 06:08:21 -- common/autotest_common.sh@10 -- # set +x 00:13:15.041 06:08:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.041 06:08:21 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:15.041 06:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.041 06:08:21 -- common/autotest_common.sh@10 -- # set +x 00:13:15.041 [2024-07-13 06:08:21.359351] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:15.041 06:08:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.041 06:08:21 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:15.041 06:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.041 06:08:21 -- common/autotest_common.sh@10 -- # set +x 00:13:15.041 Malloc0 00:13:15.041 06:08:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.041 06:08:21 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:15.041 06:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.041 06:08:21 -- common/autotest_common.sh@10 -- # set +x 00:13:15.041 Malloc1 00:13:15.041 06:08:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.041 06:08:21 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:15.041 06:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.041 06:08:21 -- common/autotest_common.sh@10 -- # set +x 00:13:15.041 06:08:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.041 06:08:21 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:15.041 06:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.041 06:08:21 -- common/autotest_common.sh@10 -- # set +x 00:13:15.041 06:08:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.041 06:08:21 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:15.041 06:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.041 06:08:21 -- common/autotest_common.sh@10 -- # set +x 00:13:15.041 06:08:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.041 06:08:21 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.041 06:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.041 06:08:21 -- common/autotest_common.sh@10 -- # set +x 00:13:15.041 [2024-07-13 06:08:21.441891] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:13:15.041 06:08:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.041 06:08:21 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:15.041 06:08:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.041 06:08:21 -- common/autotest_common.sh@10 -- # set +x 00:13:15.041 06:08:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.041 06:08:21 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:13:15.041 00:13:15.041 Discovery Log Number of Records 2, Generation counter 2 00:13:15.041 =====Discovery Log Entry 0====== 00:13:15.041 trtype: tcp 00:13:15.041 adrfam: ipv4 00:13:15.041 subtype: current discovery subsystem 00:13:15.041 treq: not required 00:13:15.041 portid: 0 00:13:15.041 trsvcid: 4420 00:13:15.041 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:15.041 traddr: 10.0.0.2 00:13:15.041 eflags: explicit discovery connections, duplicate discovery information 00:13:15.041 sectype: none 00:13:15.041 =====Discovery Log Entry 1====== 00:13:15.041 trtype: tcp 00:13:15.041 adrfam: ipv4 00:13:15.041 subtype: nvme subsystem 00:13:15.041 treq: not required 00:13:15.041 portid: 0 00:13:15.041 trsvcid: 4420 00:13:15.041 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:15.041 traddr: 10.0.0.2 00:13:15.041 eflags: none 00:13:15.041 sectype: none 00:13:15.041 06:08:21 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:15.041 06:08:21 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:15.041 06:08:21 -- nvmf/common.sh@510 -- # local dev _ 00:13:15.041 06:08:21 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:15.041 06:08:21 -- nvmf/common.sh@509 -- # nvme list 00:13:15.041 06:08:21 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:13:15.041 06:08:21 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:15.041 06:08:21 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:13:15.041 06:08:21 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:15.041 06:08:21 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:15.041 06:08:21 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:15.606 06:08:22 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:15.606 06:08:22 -- common/autotest_common.sh@1177 -- # local i=0 00:13:15.606 06:08:22 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:15.606 06:08:22 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:13:15.863 06:08:22 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:13:15.863 06:08:22 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:17.762 06:08:24 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:17.762 06:08:24 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:17.762 06:08:24 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:17.762 06:08:24 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:13:17.762 06:08:24 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:17.762 06:08:24 -- common/autotest_common.sh@1187 -- # return 0 00:13:17.762 06:08:24 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:17.762 06:08:24 -- 
nvmf/common.sh@510 -- # local dev _ 00:13:17.762 06:08:24 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:17.762 06:08:24 -- nvmf/common.sh@509 -- # nvme list 00:13:17.762 06:08:24 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:13:17.762 06:08:24 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:17.762 06:08:24 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:13:17.762 06:08:24 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:17.762 06:08:24 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:17.762 06:08:24 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:13:17.762 06:08:24 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:17.762 06:08:24 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:17.762 06:08:24 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:13:17.762 06:08:24 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:17.762 06:08:24 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:13:17.762 /dev/nvme0n1 ]] 00:13:17.762 06:08:24 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:17.762 06:08:24 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:17.762 06:08:24 -- nvmf/common.sh@510 -- # local dev _ 00:13:17.762 06:08:24 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:17.762 06:08:24 -- nvmf/common.sh@509 -- # nvme list 00:13:17.762 06:08:24 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:13:17.762 06:08:24 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:17.762 06:08:24 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:13:17.762 06:08:24 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:17.762 06:08:24 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:17.762 06:08:24 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:13:17.762 06:08:24 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:17.762 06:08:24 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:17.762 06:08:24 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:13:17.762 06:08:24 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:17.762 06:08:24 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:17.762 06:08:24 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:17.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.762 06:08:24 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:17.762 06:08:24 -- common/autotest_common.sh@1198 -- # local i=0 00:13:17.762 06:08:24 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:17.762 06:08:24 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.762 06:08:24 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:17.762 06:08:24 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.762 06:08:24 -- common/autotest_common.sh@1210 -- # return 0 00:13:17.762 06:08:24 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:17.762 06:08:24 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.762 06:08:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:17.762 06:08:24 -- common/autotest_common.sh@10 -- # set +x 00:13:17.762 06:08:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:17.762 06:08:24 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:17.762 06:08:24 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:17.762 06:08:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:17.762 06:08:24 -- nvmf/common.sh@116 -- # sync 00:13:17.762 06:08:24 -- 
nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:17.762 06:08:24 -- nvmf/common.sh@119 -- # set +e 00:13:17.762 06:08:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:17.762 06:08:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:17.762 rmmod nvme_tcp 00:13:17.762 rmmod nvme_fabrics 00:13:18.020 rmmod nvme_keyring 00:13:18.020 06:08:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:18.020 06:08:24 -- nvmf/common.sh@123 -- # set -e 00:13:18.020 06:08:24 -- nvmf/common.sh@124 -- # return 0 00:13:18.020 06:08:24 -- nvmf/common.sh@477 -- # '[' -n 1087909 ']' 00:13:18.020 06:08:24 -- nvmf/common.sh@478 -- # killprocess 1087909 00:13:18.020 06:08:24 -- common/autotest_common.sh@926 -- # '[' -z 1087909 ']' 00:13:18.020 06:08:24 -- common/autotest_common.sh@930 -- # kill -0 1087909 00:13:18.020 06:08:24 -- common/autotest_common.sh@931 -- # uname 00:13:18.020 06:08:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:18.020 06:08:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1087909 00:13:18.020 06:08:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:18.020 06:08:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:18.020 06:08:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1087909' 00:13:18.020 killing process with pid 1087909 00:13:18.020 06:08:24 -- common/autotest_common.sh@945 -- # kill 1087909 00:13:18.020 06:08:24 -- common/autotest_common.sh@950 -- # wait 1087909 00:13:18.279 06:08:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:18.279 06:08:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:18.279 06:08:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:18.279 06:08:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:18.279 06:08:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:18.279 06:08:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.279 06:08:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:18.279 06:08:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.196 06:08:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:20.196 00:13:20.196 real 0m8.495s 00:13:20.196 user 0m16.626s 00:13:20.196 sys 0m2.185s 00:13:20.196 06:08:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:20.196 06:08:26 -- common/autotest_common.sh@10 -- # set +x 00:13:20.196 ************************************ 00:13:20.196 END TEST nvmf_nvme_cli 00:13:20.196 ************************************ 00:13:20.196 06:08:26 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:13:20.197 06:08:26 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:20.197 06:08:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:20.197 06:08:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:20.197 06:08:26 -- common/autotest_common.sh@10 -- # set +x 00:13:20.197 ************************************ 00:13:20.197 START TEST nvmf_host_management 00:13:20.197 ************************************ 00:13:20.197 06:08:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:20.456 * Looking for test storage... 
00:13:20.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:20.456 06:08:26 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:20.456 06:08:26 -- nvmf/common.sh@7 -- # uname -s 00:13:20.456 06:08:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.456 06:08:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.456 06:08:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:20.456 06:08:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.456 06:08:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.456 06:08:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.456 06:08:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.456 06:08:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.456 06:08:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.456 06:08:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.456 06:08:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:20.456 06:08:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:20.456 06:08:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.456 06:08:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.456 06:08:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:20.456 06:08:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:20.456 06:08:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.456 06:08:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.456 06:08:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.456 06:08:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.456 06:08:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.457 06:08:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.457 06:08:26 -- paths/export.sh@5 -- # export PATH 00:13:20.457 06:08:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.457 06:08:26 -- nvmf/common.sh@46 -- # : 0 00:13:20.457 06:08:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:20.457 06:08:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:20.457 06:08:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:20.457 06:08:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.457 06:08:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.457 06:08:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:20.457 06:08:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:20.457 06:08:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:20.457 06:08:26 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:20.457 06:08:26 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:20.457 06:08:26 -- target/host_management.sh@104 -- # nvmftestinit 00:13:20.457 06:08:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:20.457 06:08:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:20.457 06:08:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:20.457 06:08:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:20.457 06:08:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:20.457 06:08:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.457 06:08:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:20.457 06:08:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.457 06:08:26 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:20.457 06:08:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:20.457 06:08:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:20.457 06:08:26 -- common/autotest_common.sh@10 -- # set +x 00:13:22.363 06:08:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:22.363 06:08:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:22.363 06:08:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:22.363 06:08:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:22.363 06:08:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:22.363 06:08:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:22.363 06:08:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:22.363 06:08:28 -- nvmf/common.sh@294 -- # net_devs=() 00:13:22.363 06:08:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:22.363 
06:08:28 -- nvmf/common.sh@295 -- # e810=() 00:13:22.363 06:08:28 -- nvmf/common.sh@295 -- # local -ga e810 00:13:22.363 06:08:28 -- nvmf/common.sh@296 -- # x722=() 00:13:22.363 06:08:28 -- nvmf/common.sh@296 -- # local -ga x722 00:13:22.363 06:08:28 -- nvmf/common.sh@297 -- # mlx=() 00:13:22.363 06:08:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:22.363 06:08:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:22.363 06:08:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:22.363 06:08:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:22.363 06:08:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:22.363 06:08:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:22.363 06:08:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:22.363 06:08:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:22.363 06:08:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:22.363 06:08:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:22.363 06:08:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:22.363 06:08:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:22.363 06:08:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:22.363 06:08:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:22.363 06:08:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:22.363 06:08:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:22.363 06:08:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:22.363 06:08:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:22.363 06:08:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:22.363 06:08:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:22.363 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:22.363 06:08:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:22.363 06:08:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:22.363 06:08:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.363 06:08:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.363 06:08:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:22.363 06:08:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:22.363 06:08:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:22.363 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:22.363 06:08:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:22.363 06:08:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:22.363 06:08:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.363 06:08:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.363 06:08:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:22.363 06:08:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:22.363 06:08:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:22.363 06:08:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:22.363 06:08:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:22.363 06:08:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.363 06:08:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:22.363 06:08:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.363 06:08:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:13:22.363 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:22.363 06:08:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.363 06:08:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:22.363 06:08:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.363 06:08:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:22.363 06:08:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.363 06:08:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:22.363 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:22.363 06:08:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.363 06:08:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:22.363 06:08:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:22.363 06:08:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:22.363 06:08:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:22.363 06:08:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:22.363 06:08:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:22.363 06:08:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:22.363 06:08:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:22.363 06:08:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:22.363 06:08:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:22.363 06:08:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:22.363 06:08:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:22.363 06:08:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:22.363 06:08:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:22.363 06:08:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:22.363 06:08:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:22.363 06:08:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:22.363 06:08:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:22.363 06:08:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:22.363 06:08:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:22.363 06:08:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:22.363 06:08:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:22.363 06:08:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:22.363 06:08:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:22.363 06:08:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:22.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:22.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:13:22.363 00:13:22.363 --- 10.0.0.2 ping statistics --- 00:13:22.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.363 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:13:22.363 06:08:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:22.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:22.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:13:22.363 00:13:22.363 --- 10.0.0.1 ping statistics --- 00:13:22.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.363 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:13:22.363 06:08:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:22.363 06:08:28 -- nvmf/common.sh@410 -- # return 0 00:13:22.363 06:08:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:22.363 06:08:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:22.363 06:08:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:22.363 06:08:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:22.363 06:08:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:22.363 06:08:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:22.363 06:08:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:22.363 06:08:28 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:13:22.363 06:08:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:22.363 06:08:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:22.363 06:08:28 -- common/autotest_common.sh@10 -- # set +x 00:13:22.363 ************************************ 00:13:22.363 START TEST nvmf_host_management 00:13:22.363 ************************************ 00:13:22.363 06:08:28 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:13:22.363 06:08:28 -- target/host_management.sh@69 -- # starttarget 00:13:22.363 06:08:28 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:22.363 06:08:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:22.363 06:08:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:22.363 06:08:28 -- common/autotest_common.sh@10 -- # set +x 00:13:22.363 06:08:28 -- nvmf/common.sh@469 -- # nvmfpid=1090317 00:13:22.363 06:08:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:22.363 06:08:28 -- nvmf/common.sh@470 -- # waitforlisten 1090317 00:13:22.363 06:08:28 -- common/autotest_common.sh@819 -- # '[' -z 1090317 ']' 00:13:22.363 06:08:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.363 06:08:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:22.363 06:08:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.363 06:08:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:22.363 06:08:28 -- common/autotest_common.sh@10 -- # set +x 00:13:22.621 [2024-07-13 06:08:28.909328] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:13:22.621 [2024-07-13 06:08:28.909399] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.621 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.621 [2024-07-13 06:08:28.988347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:22.621 [2024-07-13 06:08:29.128066] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:22.621 [2024-07-13 06:08:29.128258] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:22.621 [2024-07-13 06:08:29.128298] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:22.621 [2024-07-13 06:08:29.128321] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:22.621 [2024-07-13 06:08:29.128430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.621 [2024-07-13 06:08:29.128549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:22.621 [2024-07-13 06:08:29.128616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:22.621 [2024-07-13 06:08:29.128626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.553 06:08:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:23.553 06:08:30 -- common/autotest_common.sh@852 -- # return 0 00:13:23.553 06:08:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:23.553 06:08:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:23.553 06:08:30 -- common/autotest_common.sh@10 -- # set +x 00:13:23.553 06:08:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:23.553 06:08:30 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:23.553 06:08:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.553 06:08:30 -- common/autotest_common.sh@10 -- # set +x 00:13:23.553 [2024-07-13 06:08:30.034764] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:23.553 06:08:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.553 06:08:30 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:23.553 06:08:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:23.553 06:08:30 -- common/autotest_common.sh@10 -- # set +x 00:13:23.553 06:08:30 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:23.553 06:08:30 -- target/host_management.sh@23 -- # cat 00:13:23.553 06:08:30 -- target/host_management.sh@30 -- # rpc_cmd 00:13:23.553 06:08:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.553 06:08:30 -- common/autotest_common.sh@10 -- # set +x 00:13:23.810 Malloc0 00:13:23.810 [2024-07-13 06:08:30.094434] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.810 06:08:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.810 06:08:30 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:23.810 06:08:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:23.810 06:08:30 -- common/autotest_common.sh@10 -- # set +x 00:13:23.811 06:08:30 -- target/host_management.sh@73 -- # perfpid=1090504 00:13:23.811 06:08:30 -- target/host_management.sh@74 -- # 
waitforlisten 1090504 /var/tmp/bdevperf.sock 00:13:23.811 06:08:30 -- common/autotest_common.sh@819 -- # '[' -z 1090504 ']' 00:13:23.811 06:08:30 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:23.811 06:08:30 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:23.811 06:08:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:23.811 06:08:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:23.811 06:08:30 -- nvmf/common.sh@520 -- # config=() 00:13:23.811 06:08:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:23.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:23.811 06:08:30 -- nvmf/common.sh@520 -- # local subsystem config 00:13:23.811 06:08:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:23.811 06:08:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:13:23.811 06:08:30 -- common/autotest_common.sh@10 -- # set +x 00:13:23.811 06:08:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:13:23.811 { 00:13:23.811 "params": { 00:13:23.811 "name": "Nvme$subsystem", 00:13:23.811 "trtype": "$TEST_TRANSPORT", 00:13:23.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:23.811 "adrfam": "ipv4", 00:13:23.811 "trsvcid": "$NVMF_PORT", 00:13:23.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:23.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:23.811 "hdgst": ${hdgst:-false}, 00:13:23.811 "ddgst": ${ddgst:-false} 00:13:23.811 }, 00:13:23.811 "method": "bdev_nvme_attach_controller" 00:13:23.811 } 00:13:23.811 EOF 00:13:23.811 )") 00:13:23.811 06:08:30 -- nvmf/common.sh@542 -- # cat 00:13:23.811 06:08:30 -- nvmf/common.sh@544 -- # jq . 00:13:23.811 06:08:30 -- nvmf/common.sh@545 -- # IFS=, 00:13:23.811 06:08:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:13:23.811 "params": { 00:13:23.811 "name": "Nvme0", 00:13:23.811 "trtype": "tcp", 00:13:23.811 "traddr": "10.0.0.2", 00:13:23.811 "adrfam": "ipv4", 00:13:23.811 "trsvcid": "4420", 00:13:23.811 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:23.811 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:23.811 "hdgst": false, 00:13:23.811 "ddgst": false 00:13:23.811 }, 00:13:23.811 "method": "bdev_nvme_attach_controller" 00:13:23.811 }' 00:13:23.811 [2024-07-13 06:08:30.161073] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:23.811 [2024-07-13 06:08:30.161151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1090504 ] 00:13:23.811 EAL: No free 2048 kB hugepages reported on node 1 00:13:23.811 [2024-07-13 06:08:30.225544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.068 [2024-07-13 06:08:30.335584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.326 Running I/O for 10 seconds... 
00:13:24.894 06:08:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:24.894 06:08:31 -- common/autotest_common.sh@852 -- # return 0 00:13:24.894 06:08:31 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:24.894 06:08:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:24.894 06:08:31 -- common/autotest_common.sh@10 -- # set +x 00:13:24.894 06:08:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:24.894 06:08:31 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:24.894 06:08:31 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:24.894 06:08:31 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:24.894 06:08:31 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:24.894 06:08:31 -- target/host_management.sh@52 -- # local ret=1 00:13:24.894 06:08:31 -- target/host_management.sh@53 -- # local i 00:13:24.894 06:08:31 -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:24.894 06:08:31 -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:24.894 06:08:31 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:24.894 06:08:31 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:24.894 06:08:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:24.894 06:08:31 -- common/autotest_common.sh@10 -- # set +x 00:13:24.894 06:08:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:24.894 06:08:31 -- target/host_management.sh@55 -- # read_io_count=1647 00:13:24.894 06:08:31 -- target/host_management.sh@58 -- # '[' 1647 -ge 100 ']' 00:13:24.894 06:08:31 -- target/host_management.sh@59 -- # ret=0 00:13:24.894 06:08:31 -- target/host_management.sh@60 -- # break 00:13:24.894 06:08:31 -- target/host_management.sh@64 -- # return 0 00:13:24.894 06:08:31 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:24.894 06:08:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:24.894 06:08:31 -- common/autotest_common.sh@10 -- # set +x 00:13:24.894 [2024-07-13 06:08:31.170273] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefb480 is same with the state(5) to be set 00:13:24.894 [2024-07-13 06:08:31.170389] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefb480 is same with the state(5) to be set 00:13:24.894 [2024-07-13 06:08:31.170405] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefb480 is same with the state(5) to be set 00:13:24.894 [2024-07-13 06:08:31.170419] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefb480 is same with the state(5) to be set 00:13:24.894 [2024-07-13 06:08:31.170432] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefb480 is same with the state(5) to be set 00:13:24.894 [2024-07-13 06:08:31.170444] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefb480 is same with the state(5) to be set 00:13:24.894 [2024-07-13 06:08:31.170460] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefb480 is same with the state(5) to be set 00:13:24.894 [2024-07-13 06:08:31.170472] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefb480 is same with the state(5) to 
be set 00:13:24.894 [2024-07-13 06:08:31.170484] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefb480 is same with the state(5) to be set 00:13:24.894 [last message repeated 42 times, timestamps 2024-07-13 06:08:31.170496 through 06:08:31.171010] 00:13:24.895 [2024-07-13 06:08:31.171022]
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefb480 is same with the state(5) to be set 00:13:24.895 [2024-07-13 06:08:31.171034] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefb480 is same with the state(5) to be set 00:13:24.895 [2024-07-13 06:08:31.171046] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefb480 is same with the state(5) to be set 00:13:24.895 [2024-07-13 06:08:31.171057] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefb480 is same with the state(5) to be set 00:13:24.895 [2024-07-13 06:08:31.171445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.171487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.895 [2024-07-13 06:08:31.171516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.171533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.895 [2024-07-13 06:08:31.171551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.171565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.895 [2024-07-13 06:08:31.171581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.171596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.895 [2024-07-13 06:08:31.171611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.171626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.895 [2024-07-13 06:08:31.171641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.171656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.895 [2024-07-13 06:08:31.171671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.171686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.895 [2024-07-13 06:08:31.171701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.171716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.895 [2024-07-13 06:08:31.171731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:45 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.171746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.895 [2024-07-13 06:08:31.171767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.171783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.895 [2024-07-13 06:08:31.171798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.171812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.895 [2024-07-13 06:08:31.171828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.171842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.895 [2024-07-13 06:08:31.171857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.171880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.895 [2024-07-13 06:08:31.171897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.171912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.895 [2024-07-13 06:08:31.171927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.171942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.895 [2024-07-13 06:08:31.171957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.171972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.895 [2024-07-13 06:08:31.171989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.172003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.895 [2024-07-13 06:08:31.172019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.172034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.895 [2024-07-13 06:08:31.172049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 
nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.172064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.895 [2024-07-13 06:08:31.172079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.172094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.895 [2024-07-13 06:08:31.172109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.172123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.895 [2024-07-13 06:08:31.172138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.172157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.895 [2024-07-13 06:08:31.172189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.172204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.895 [2024-07-13 06:08:31.172219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.172232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.895 [2024-07-13 06:08:31.172247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.172261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.895 [2024-07-13 06:08:31.172276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.172289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.895 [2024-07-13 06:08:31.172304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.172317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.895 [2024-07-13 06:08:31.172332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.172346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.895 [2024-07-13 06:08:31.172361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 
lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.172374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.895 [2024-07-13 06:08:31.172389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.172402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.895 [2024-07-13 06:08:31.172417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.172432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.895 [2024-07-13 06:08:31.172446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.172461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.895 [2024-07-13 06:08:31.172476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.172490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.895 [2024-07-13 06:08:31.172505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.172519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.895 [2024-07-13 06:08:31.172537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.895 [2024-07-13 06:08:31.172551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.896 [2024-07-13 06:08:31.172566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.896 [2024-07-13 06:08:31.172580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.896 [2024-07-13 06:08:31.172595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.896 [2024-07-13 06:08:31.172609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.896 [2024-07-13 06:08:31.172624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.896 [2024-07-13 06:08:31.172638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.896 [2024-07-13 06:08:31.172654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:99456 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.896 [2024-07-13 06:08:31.172669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.896 [2024-07-13 06:08:31.172684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.896 [2024-07-13 06:08:31.172699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.896 [2024-07-13 06:08:31.172714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.896 [2024-07-13 06:08:31.172729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.896 [2024-07-13 06:08:31.172744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.896 [2024-07-13 06:08:31.172758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.896 [2024-07-13 06:08:31.172773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.896 [2024-07-13 06:08:31.172787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.896 [2024-07-13 06:08:31.172802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.896 [2024-07-13 06:08:31.172816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.896 [2024-07-13 06:08:31.172830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.896 [2024-07-13 06:08:31.172844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.896 [2024-07-13 06:08:31.172859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.896 [2024-07-13 06:08:31.172898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.896 [2024-07-13 06:08:31.172917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.896 [2024-07-13 06:08:31.172935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.896 [2024-07-13 06:08:31.172952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.896 [2024-07-13 06:08:31.172966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.896 [2024-07-13 06:08:31.172982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:106240 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.896 [2024-07-13 06:08:31.172996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.896 [2024-07-13 06:08:31.173012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.896 [2024-07-13 06:08:31.173027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.896 [2024-07-13 06:08:31.173042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.896 [2024-07-13 06:08:31.173056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.896 [2024-07-13 06:08:31.173071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.896 [2024-07-13 06:08:31.173085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.896 [2024-07-13 06:08:31.173101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.896 [2024-07-13 06:08:31.173115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.896 [2024-07-13 06:08:31.173130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.896 [2024-07-13 06:08:31.173144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.896 [2024-07-13 06:08:31.173159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.896 [2024-07-13 06:08:31.173189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.896 [2024-07-13 06:08:31.173204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.896 [2024-07-13 06:08:31.173218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.896 [2024-07-13 06:08:31.173233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.896 [2024-07-13 06:08:31.173247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.896 [2024-07-13 06:08:31.173262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.896 [2024-07-13 06:08:31.173275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.896 [2024-07-13 06:08:31.173290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:101248 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:13:24.896 [2024-07-13 06:08:31.173304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.896 [2024-07-13 06:08:31.173322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.896 [2024-07-13 06:08:31.173337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.896 [2024-07-13 06:08:31.173352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.896 [2024-07-13 06:08:31.173367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.896 [2024-07-13 06:08:31.173382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.896 [2024-07-13 06:08:31.173395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.896 [2024-07-13 06:08:31.173410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.896 [2024-07-13 06:08:31.173424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.896 [2024-07-13 06:08:31.173440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:24.896 [2024-07-13 06:08:31.173454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.896 [2024-07-13 06:08:31.173543] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xae3b50 was disconnected and freed. reset controller. 
00:13:24.896 06:08:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:24.896 06:08:31 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:24.896 [2024-07-13 06:08:31.174706] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:24.896 06:08:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:24.896 06:08:31 -- common/autotest_common.sh@10 -- # set +x 00:13:24.896 task offset: 101632 on job bdev=Nvme0n1 fails 00:13:24.896 00:13:24.896 Latency(us) 00:13:24.896 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:24.896 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:24.896 Job: Nvme0n1 ended in about 0.50 seconds with error 00:13:24.896 Verification LBA range: start 0x0 length 0x400 00:13:24.896 Nvme0n1 : 0.50 3564.80 222.80 126.96 0.00 17054.46 2475.80 22719.15 00:13:24.896 =================================================================================================================== 00:13:24.896 Total : 3564.80 222.80 126.96 0.00 17054.46 2475.80 22719.15 00:13:24.896 [2024-07-13 06:08:31.176636] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:24.896 [2024-07-13 06:08:31.176665] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae6400 (9): Bad file descriptor 00:13:24.896 [2024-07-13 06:08:31.177709] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:13:24.896 [2024-07-13 06:08:31.177879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:13:24.896 [2024-07-13 06:08:31.177910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:24.896 [2024-07-13 06:08:31.177938] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:13:24.896 [2024-07-13 06:08:31.177955] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:13:24.896 [2024-07-13 06:08:31.177974] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:13:24.896 [2024-07-13 06:08:31.177987] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xae6400 00:13:24.896 [2024-07-13 06:08:31.178022] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae6400 (9): Bad file descriptor 00:13:24.896 [2024-07-13 06:08:31.178046] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:13:24.896 [2024-07-13 06:08:31.178062] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:13:24.896 [2024-07-13 06:08:31.178077] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:13:24.896 [2024-07-13 06:08:31.178098] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:13:24.896 06:08:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:24.896 06:08:31 -- target/host_management.sh@87 -- # sleep 1 00:13:25.829 06:08:32 -- target/host_management.sh@91 -- # kill -9 1090504 00:13:25.829 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1090504) - No such process 00:13:25.829 06:08:32 -- target/host_management.sh@91 -- # true 00:13:25.829 06:08:32 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:25.829 06:08:32 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:25.829 06:08:32 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:25.829 06:08:32 -- nvmf/common.sh@520 -- # config=() 00:13:25.829 06:08:32 -- nvmf/common.sh@520 -- # local subsystem config 00:13:25.829 06:08:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:13:25.829 06:08:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:13:25.829 { 00:13:25.829 "params": { 00:13:25.829 "name": "Nvme$subsystem", 00:13:25.829 "trtype": "$TEST_TRANSPORT", 00:13:25.829 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:25.829 "adrfam": "ipv4", 00:13:25.829 "trsvcid": "$NVMF_PORT", 00:13:25.829 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:25.829 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:25.829 "hdgst": ${hdgst:-false}, 00:13:25.829 "ddgst": ${ddgst:-false} 00:13:25.829 }, 00:13:25.829 "method": "bdev_nvme_attach_controller" 00:13:25.829 } 00:13:25.829 EOF 00:13:25.829 )") 00:13:25.829 06:08:32 -- nvmf/common.sh@542 -- # cat 00:13:25.829 06:08:32 -- nvmf/common.sh@544 -- # jq . 00:13:25.829 06:08:32 -- nvmf/common.sh@545 -- # IFS=, 00:13:25.829 06:08:32 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:13:25.829 "params": { 00:13:25.829 "name": "Nvme0", 00:13:25.829 "trtype": "tcp", 00:13:25.829 "traddr": "10.0.0.2", 00:13:25.829 "adrfam": "ipv4", 00:13:25.829 "trsvcid": "4420", 00:13:25.829 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:25.829 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:25.829 "hdgst": false, 00:13:25.829 "ddgst": false 00:13:25.829 }, 00:13:25.829 "method": "bdev_nvme_attach_controller" 00:13:25.829 }' 00:13:25.829 [2024-07-13 06:08:32.227465] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:25.829 [2024-07-13 06:08:32.227539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1090802 ] 00:13:25.829 EAL: No free 2048 kB hugepages reported on node 1 00:13:25.829 [2024-07-13 06:08:32.287596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.086 [2024-07-13 06:08:32.399269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.344 Running I/O for 1 seconds... 
00:13:27.275 00:13:27.275 Latency(us) 00:13:27.275 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:27.275 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:27.275 Verification LBA range: start 0x0 length 0x400 00:13:27.275 Nvme0n1 : 1.01 3838.15 239.88 0.00 0.00 16422.99 1711.22 23592.96 00:13:27.275 =================================================================================================================== 00:13:27.275 Total : 3838.15 239.88 0.00 0.00 16422.99 1711.22 23592.96 00:13:27.533 06:08:33 -- target/host_management.sh@101 -- # stoptarget 00:13:27.533 06:08:33 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:27.533 06:08:33 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:27.533 06:08:33 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:27.533 06:08:33 -- target/host_management.sh@40 -- # nvmftestfini 00:13:27.533 06:08:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:27.533 06:08:33 -- nvmf/common.sh@116 -- # sync 00:13:27.534 06:08:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:27.534 06:08:33 -- nvmf/common.sh@119 -- # set +e 00:13:27.534 06:08:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:27.534 06:08:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:27.534 rmmod nvme_tcp 00:13:27.534 rmmod nvme_fabrics 00:13:27.534 rmmod nvme_keyring 00:13:27.534 06:08:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:27.534 06:08:34 -- nvmf/common.sh@123 -- # set -e 00:13:27.534 06:08:34 -- nvmf/common.sh@124 -- # return 0 00:13:27.534 06:08:34 -- nvmf/common.sh@477 -- # '[' -n 1090317 ']' 00:13:27.534 06:08:34 -- nvmf/common.sh@478 -- # killprocess 1090317 00:13:27.534 06:08:34 -- common/autotest_common.sh@926 -- # '[' -z 1090317 ']' 00:13:27.534 06:08:34 -- common/autotest_common.sh@930 -- # kill -0 1090317 00:13:27.534 06:08:34 -- common/autotest_common.sh@931 -- # uname 00:13:27.534 06:08:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:27.534 06:08:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1090317 00:13:27.792 06:08:34 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:27.792 06:08:34 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:27.792 06:08:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1090317' 00:13:27.792 killing process with pid 1090317 00:13:27.792 06:08:34 -- common/autotest_common.sh@945 -- # kill 1090317 00:13:27.792 06:08:34 -- common/autotest_common.sh@950 -- # wait 1090317 00:13:28.051 [2024-07-13 06:08:34.332820] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:28.051 06:08:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:28.051 06:08:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:28.051 06:08:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:28.051 06:08:34 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:28.051 06:08:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:28.051 06:08:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.051 06:08:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:28.051 06:08:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.952 06:08:36 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 
00:13:29.952 00:13:29.952 real 0m7.534s 00:13:29.952 user 0m23.685s 00:13:29.952 sys 0m1.353s 00:13:29.952 06:08:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:29.952 06:08:36 -- common/autotest_common.sh@10 -- # set +x 00:13:29.952 ************************************ 00:13:29.952 END TEST nvmf_host_management 00:13:29.952 ************************************ 00:13:29.952 06:08:36 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:13:29.952 00:13:29.952 real 0m9.732s 00:13:29.952 user 0m24.474s 00:13:29.952 sys 0m2.788s 00:13:29.952 06:08:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:29.952 06:08:36 -- common/autotest_common.sh@10 -- # set +x 00:13:29.952 ************************************ 00:13:29.952 END TEST nvmf_host_management 00:13:29.952 ************************************ 00:13:29.952 06:08:36 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:29.952 06:08:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:29.952 06:08:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:29.952 06:08:36 -- common/autotest_common.sh@10 -- # set +x 00:13:29.952 ************************************ 00:13:29.952 START TEST nvmf_lvol 00:13:29.952 ************************************ 00:13:29.952 06:08:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:30.210 * Looking for test storage... 00:13:30.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:30.210 06:08:36 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:30.210 06:08:36 -- nvmf/common.sh@7 -- # uname -s 00:13:30.210 06:08:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:30.210 06:08:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:30.210 06:08:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:30.210 06:08:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:30.210 06:08:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:30.210 06:08:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:30.210 06:08:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:30.210 06:08:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:30.210 06:08:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:30.210 06:08:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:30.210 06:08:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:30.210 06:08:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:30.210 06:08:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:30.210 06:08:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:30.210 06:08:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:30.210 06:08:36 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:30.210 06:08:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:30.210 06:08:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.210 06:08:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.210 06:08:36 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.210 06:08:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.211 06:08:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.211 06:08:36 -- paths/export.sh@5 -- # export PATH 00:13:30.211 06:08:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.211 06:08:36 -- nvmf/common.sh@46 -- # : 0 00:13:30.211 06:08:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:30.211 06:08:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:30.211 06:08:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:30.211 06:08:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.211 06:08:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.211 06:08:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:30.211 06:08:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:30.211 06:08:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:30.211 06:08:36 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:30.211 06:08:36 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:30.211 06:08:36 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:30.211 06:08:36 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:30.211 06:08:36 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:30.211 06:08:36 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:30.211 06:08:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:30.211 06:08:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:13:30.211 06:08:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:30.211 06:08:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:30.211 06:08:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:30.211 06:08:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.211 06:08:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:30.211 06:08:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.211 06:08:36 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:30.211 06:08:36 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:30.211 06:08:36 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:30.211 06:08:36 -- common/autotest_common.sh@10 -- # set +x 00:13:32.112 06:08:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:32.112 06:08:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:32.112 06:08:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:32.112 06:08:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:32.112 06:08:38 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:32.112 06:08:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:32.112 06:08:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:32.112 06:08:38 -- nvmf/common.sh@294 -- # net_devs=() 00:13:32.112 06:08:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:32.112 06:08:38 -- nvmf/common.sh@295 -- # e810=() 00:13:32.112 06:08:38 -- nvmf/common.sh@295 -- # local -ga e810 00:13:32.112 06:08:38 -- nvmf/common.sh@296 -- # x722=() 00:13:32.112 06:08:38 -- nvmf/common.sh@296 -- # local -ga x722 00:13:32.112 06:08:38 -- nvmf/common.sh@297 -- # mlx=() 00:13:32.112 06:08:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:32.112 06:08:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:32.112 06:08:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:32.112 06:08:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:32.112 06:08:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:32.112 06:08:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:32.112 06:08:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:32.112 06:08:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:32.112 06:08:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:32.112 06:08:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:32.112 06:08:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:32.112 06:08:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:32.112 06:08:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:32.112 06:08:38 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:32.112 06:08:38 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:32.112 06:08:38 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:32.112 06:08:38 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:32.112 06:08:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:32.112 06:08:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:32.112 06:08:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:32.112 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:32.112 06:08:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:32.112 06:08:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:32.112 06:08:38 -- nvmf/common.sh@349 
-- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.112 06:08:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.112 06:08:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:32.112 06:08:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:32.112 06:08:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:32.112 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:32.112 06:08:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:32.112 06:08:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:32.112 06:08:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.112 06:08:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.112 06:08:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:32.112 06:08:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:32.112 06:08:38 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:32.112 06:08:38 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:32.112 06:08:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:32.112 06:08:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.112 06:08:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:32.112 06:08:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.112 06:08:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:32.112 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:32.112 06:08:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.112 06:08:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:32.112 06:08:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.112 06:08:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:32.112 06:08:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.112 06:08:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:32.112 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:32.112 06:08:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.112 06:08:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:32.112 06:08:38 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:32.112 06:08:38 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:32.112 06:08:38 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:32.112 06:08:38 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:32.112 06:08:38 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:32.112 06:08:38 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:32.112 06:08:38 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:32.112 06:08:38 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:32.112 06:08:38 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:32.112 06:08:38 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:32.112 06:08:38 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:32.112 06:08:38 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:32.112 06:08:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:32.112 06:08:38 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:32.112 06:08:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:32.112 06:08:38 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:32.112 06:08:38 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:32.112 06:08:38 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
00:13:32.112 06:08:38 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:32.112 06:08:38 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:32.112 06:08:38 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:32.112 06:08:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:32.112 06:08:38 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:32.112 06:08:38 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:32.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:32.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:13:32.112 00:13:32.112 --- 10.0.0.2 ping statistics --- 00:13:32.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.112 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:13:32.112 06:08:38 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:32.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:32.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:13:32.112 00:13:32.112 --- 10.0.0.1 ping statistics --- 00:13:32.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.112 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:13:32.112 06:08:38 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:32.112 06:08:38 -- nvmf/common.sh@410 -- # return 0 00:13:32.112 06:08:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:32.112 06:08:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:32.112 06:08:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:32.112 06:08:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:32.112 06:08:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:32.112 06:08:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:32.112 06:08:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:32.112 06:08:38 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:32.112 06:08:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:32.112 06:08:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:32.112 06:08:38 -- common/autotest_common.sh@10 -- # set +x 00:13:32.112 06:08:38 -- nvmf/common.sh@469 -- # nvmfpid=1093033 00:13:32.112 06:08:38 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:32.112 06:08:38 -- nvmf/common.sh@470 -- # waitforlisten 1093033 00:13:32.112 06:08:38 -- common/autotest_common.sh@819 -- # '[' -z 1093033 ']' 00:13:32.112 06:08:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.112 06:08:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:32.112 06:08:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.112 06:08:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:32.112 06:08:38 -- common/autotest_common.sh@10 -- # set +x 00:13:32.371 [2024-07-13 06:08:38.634819] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
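
Note on the test-network setup traced above: nvmf_tcp_init wires the two E810 ports that gather_supported_nvmf_pci_devs discovered into a two-endpoint NVMe/TCP topology. The net device behind each PCI function is read from /sys/bus/pci/devices/<pci>/net/ (which is where the cvl_0_0 and cvl_0_1 names come from); one port is moved into a private network namespace to play the target, and the other stays in the root namespace as the initiator. Reconstructed from the trace, with interface names and 10.0.0.0/24 addresses specific to this run, the sequence is roughly:

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1                   # start from clean interfaces
    ip netns add cvl_0_0_ns_spdk                                         # namespace that will hold the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP traffic on port 4420
    ping -c 1 10.0.0.2                                                   # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target namespace -> root namespace

The two pings are what produced the statistics above; nvmf_tgt is then launched inside the namespace so that initiator/target traffic traverses the physical ports rather than the local loopback.
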
00:13:32.371 [2024-07-13 06:08:38.634930] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.371 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.371 [2024-07-13 06:08:38.700823] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:32.371 [2024-07-13 06:08:38.814277] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:32.371 [2024-07-13 06:08:38.814449] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:32.371 [2024-07-13 06:08:38.814469] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:32.371 [2024-07-13 06:08:38.814482] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:32.371 [2024-07-13 06:08:38.814541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.371 [2024-07-13 06:08:38.814608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:32.371 [2024-07-13 06:08:38.814611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.301 06:08:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:33.301 06:08:39 -- common/autotest_common.sh@852 -- # return 0 00:13:33.301 06:08:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:33.301 06:08:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:33.301 06:08:39 -- common/autotest_common.sh@10 -- # set +x 00:13:33.301 06:08:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:33.301 06:08:39 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:33.557 [2024-07-13 06:08:39.818365] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:33.557 06:08:39 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:33.814 06:08:40 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:33.814 06:08:40 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:34.073 06:08:40 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:34.073 06:08:40 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:34.330 06:08:40 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:34.588 06:08:40 -- target/nvmf_lvol.sh@29 -- # lvs=eeb808b6-a2c2-49a1-a2e4-cd921987847e 00:13:34.588 06:08:40 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u eeb808b6-a2c2-49a1-a2e4-cd921987847e lvol 20 00:13:34.846 06:08:41 -- target/nvmf_lvol.sh@32 -- # lvol=4ac27fa7-2151-41a2-9cf6-6361c30fb65c 00:13:34.846 06:08:41 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:34.846 06:08:41 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
4ac27fa7-2151-41a2-9cf6-6361c30fb65c 00:13:35.104 06:08:41 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:35.362 [2024-07-13 06:08:41.828795] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:35.362 06:08:41 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:35.620 06:08:42 -- target/nvmf_lvol.sh@42 -- # perf_pid=1093477 00:13:35.620 06:08:42 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:35.620 06:08:42 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:35.620 EAL: No free 2048 kB hugepages reported on node 1 00:13:37.005 06:08:43 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 4ac27fa7-2151-41a2-9cf6-6361c30fb65c MY_SNAPSHOT 00:13:37.005 06:08:43 -- target/nvmf_lvol.sh@47 -- # snapshot=8902bfce-94b8-4af9-a16c-7dbc6e241d4c 00:13:37.005 06:08:43 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 4ac27fa7-2151-41a2-9cf6-6361c30fb65c 30 00:13:37.263 06:08:43 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 8902bfce-94b8-4af9-a16c-7dbc6e241d4c MY_CLONE 00:13:37.520 06:08:43 -- target/nvmf_lvol.sh@49 -- # clone=f29a554f-b864-458d-8bd5-7bd8bc251530 00:13:37.520 06:08:43 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f29a554f-b864-458d-8bd5-7bd8bc251530 00:13:38.087 06:08:44 -- target/nvmf_lvol.sh@53 -- # wait 1093477 00:13:46.198 Initializing NVMe Controllers 00:13:46.198 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:46.198 Controller IO queue size 128, less than required. 00:13:46.198 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:46.198 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:46.198 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:46.198 Initialization complete. Launching workers. 
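
Note: while spdk_nvme_perf drives 4 KiB random writes from the initiator side (queue depth 128, cores 3 and 4; its results follow below), the lvol test exercises the volume-management path on the live target. Stripped of the full rpc.py paths, the RPC sequence just traced is roughly the following, where rpc.py stands for scripts/rpc.py and the UUIDs are the run-specific ones printed above:

    snap=$(rpc.py bdev_lvol_snapshot 4ac27fa7-2151-41a2-9cf6-6361c30fb65c MY_SNAPSHOT)   # -> 8902bfce-...
    rpc.py bdev_lvol_resize 4ac27fa7-2151-41a2-9cf6-6361c30fb65c 30                      # grow the lvol from 20 to 30 MiB
    clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)                                     # -> f29a554f-...
    rpc.py bdev_lvol_inflate "$clone"                                                    # allocate all clusters, detaching the clone from its snapshot

Running these while the lvol is exported over NVMe/TCP and under write load is the point of the test: snapshot, resize, clone and inflate must all be safe against concurrent I/O.
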
00:13:46.198 ======================================================== 00:13:46.198 Latency(us) 00:13:46.198 Device Information : IOPS MiB/s Average min max 00:13:46.198 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10631.30 41.53 12048.16 1485.34 68411.48 00:13:46.198 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10610.10 41.45 12072.02 2243.51 74027.37 00:13:46.198 ======================================================== 00:13:46.198 Total : 21241.40 82.97 12060.08 1485.34 74027.37 00:13:46.198 00:13:46.198 06:08:52 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:46.455 06:08:52 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4ac27fa7-2151-41a2-9cf6-6361c30fb65c 00:13:46.712 06:08:53 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u eeb808b6-a2c2-49a1-a2e4-cd921987847e 00:13:46.969 06:08:53 -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:46.969 06:08:53 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:46.969 06:08:53 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:46.969 06:08:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:46.969 06:08:53 -- nvmf/common.sh@116 -- # sync 00:13:46.969 06:08:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:46.969 06:08:53 -- nvmf/common.sh@119 -- # set +e 00:13:46.969 06:08:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:46.969 06:08:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:46.969 rmmod nvme_tcp 00:13:46.969 rmmod nvme_fabrics 00:13:46.969 rmmod nvme_keyring 00:13:46.969 06:08:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:46.969 06:08:53 -- nvmf/common.sh@123 -- # set -e 00:13:46.969 06:08:53 -- nvmf/common.sh@124 -- # return 0 00:13:46.969 06:08:53 -- nvmf/common.sh@477 -- # '[' -n 1093033 ']' 00:13:46.969 06:08:53 -- nvmf/common.sh@478 -- # killprocess 1093033 00:13:46.969 06:08:53 -- common/autotest_common.sh@926 -- # '[' -z 1093033 ']' 00:13:46.969 06:08:53 -- common/autotest_common.sh@930 -- # kill -0 1093033 00:13:46.969 06:08:53 -- common/autotest_common.sh@931 -- # uname 00:13:46.969 06:08:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:46.969 06:08:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1093033 00:13:46.969 06:08:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:46.969 06:08:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:46.969 06:08:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1093033' 00:13:46.969 killing process with pid 1093033 00:13:46.969 06:08:53 -- common/autotest_common.sh@945 -- # kill 1093033 00:13:46.969 06:08:53 -- common/autotest_common.sh@950 -- # wait 1093033 00:13:47.535 06:08:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:47.536 06:08:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:47.536 06:08:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:47.536 06:08:53 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:47.536 06:08:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:47.536 06:08:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.536 06:08:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:47.536 06:08:53 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:13:49.436 06:08:55 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:49.436 00:13:49.436 real 0m19.374s 00:13:49.436 user 1m6.413s 00:13:49.436 sys 0m5.338s 00:13:49.436 06:08:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:49.436 06:08:55 -- common/autotest_common.sh@10 -- # set +x 00:13:49.436 ************************************ 00:13:49.436 END TEST nvmf_lvol 00:13:49.436 ************************************ 00:13:49.436 06:08:55 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:49.436 06:08:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:49.436 06:08:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:49.436 06:08:55 -- common/autotest_common.sh@10 -- # set +x 00:13:49.436 ************************************ 00:13:49.436 START TEST nvmf_lvs_grow 00:13:49.436 ************************************ 00:13:49.436 06:08:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:49.436 * Looking for test storage... 00:13:49.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:49.436 06:08:55 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:49.436 06:08:55 -- nvmf/common.sh@7 -- # uname -s 00:13:49.436 06:08:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.436 06:08:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.436 06:08:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.436 06:08:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.436 06:08:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.436 06:08:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.436 06:08:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.436 06:08:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.436 06:08:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.436 06:08:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.436 06:08:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:49.436 06:08:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:49.436 06:08:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.436 06:08:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:49.436 06:08:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:49.436 06:08:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:49.436 06:08:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.436 06:08:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.436 06:08:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.436 06:08:55 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.436 06:08:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.436 06:08:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.436 06:08:55 -- paths/export.sh@5 -- # export PATH 00:13:49.436 06:08:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.436 06:08:55 -- nvmf/common.sh@46 -- # : 0 00:13:49.436 06:08:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:49.436 06:08:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:49.436 06:08:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:49.436 06:08:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.436 06:08:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.436 06:08:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:49.436 06:08:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:49.436 06:08:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:49.436 06:08:55 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:49.436 06:08:55 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:49.436 06:08:55 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:13:49.436 06:08:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:49.436 06:08:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:49.436 06:08:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:49.436 06:08:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:49.436 06:08:55 -- nvmf/common.sh@400 -- # 
remove_spdk_ns 00:13:49.436 06:08:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.436 06:08:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:49.436 06:08:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.436 06:08:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:49.436 06:08:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:49.436 06:08:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:49.436 06:08:55 -- common/autotest_common.sh@10 -- # set +x 00:13:51.333 06:08:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:51.333 06:08:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:51.333 06:08:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:51.333 06:08:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:51.333 06:08:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:51.333 06:08:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:51.333 06:08:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:51.333 06:08:57 -- nvmf/common.sh@294 -- # net_devs=() 00:13:51.333 06:08:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:51.333 06:08:57 -- nvmf/common.sh@295 -- # e810=() 00:13:51.333 06:08:57 -- nvmf/common.sh@295 -- # local -ga e810 00:13:51.333 06:08:57 -- nvmf/common.sh@296 -- # x722=() 00:13:51.333 06:08:57 -- nvmf/common.sh@296 -- # local -ga x722 00:13:51.333 06:08:57 -- nvmf/common.sh@297 -- # mlx=() 00:13:51.333 06:08:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:51.333 06:08:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:51.333 06:08:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:51.333 06:08:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:51.333 06:08:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:51.333 06:08:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:51.333 06:08:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:51.333 06:08:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:51.333 06:08:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:51.333 06:08:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:51.333 06:08:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:51.333 06:08:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:51.333 06:08:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:51.333 06:08:57 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:51.333 06:08:57 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:51.333 06:08:57 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:51.333 06:08:57 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:51.333 06:08:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:51.333 06:08:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:51.333 06:08:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:51.333 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:51.333 06:08:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:51.333 06:08:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:51.333 06:08:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:51.333 06:08:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:51.333 06:08:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:51.333 
06:08:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:51.333 06:08:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:51.333 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:51.333 06:08:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:51.333 06:08:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:51.333 06:08:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:51.333 06:08:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:51.333 06:08:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:51.334 06:08:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:51.334 06:08:57 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:51.334 06:08:57 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:51.334 06:08:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:51.334 06:08:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.334 06:08:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:51.334 06:08:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.590 06:08:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:51.590 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:51.590 06:08:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.590 06:08:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:51.590 06:08:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.590 06:08:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:51.590 06:08:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.590 06:08:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:51.590 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:51.590 06:08:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.590 06:08:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:51.590 06:08:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:51.590 06:08:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:51.590 06:08:57 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:51.590 06:08:57 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:51.590 06:08:57 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:51.590 06:08:57 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:51.590 06:08:57 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:51.590 06:08:57 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:51.590 06:08:57 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:51.590 06:08:57 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:51.590 06:08:57 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:51.590 06:08:57 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:51.590 06:08:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:51.590 06:08:57 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:51.590 06:08:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:51.590 06:08:57 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:51.590 06:08:57 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:51.590 06:08:57 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:51.590 06:08:57 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:51.590 06:08:57 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:51.590 
06:08:57 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:51.590 06:08:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:51.590 06:08:57 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:51.590 06:08:57 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:51.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:51.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:13:51.590 00:13:51.590 --- 10.0.0.2 ping statistics --- 00:13:51.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.590 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:13:51.590 06:08:57 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:51.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:51.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:13:51.590 00:13:51.590 --- 10.0.0.1 ping statistics --- 00:13:51.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.590 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:13:51.590 06:08:57 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:51.590 06:08:57 -- nvmf/common.sh@410 -- # return 0 00:13:51.590 06:08:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:51.590 06:08:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:51.590 06:08:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:51.590 06:08:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:51.590 06:08:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:51.590 06:08:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:51.590 06:08:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:51.590 06:08:57 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:13:51.590 06:08:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:51.590 06:08:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:51.590 06:08:57 -- common/autotest_common.sh@10 -- # set +x 00:13:51.590 06:08:58 -- nvmf/common.sh@469 -- # nvmfpid=1096787 00:13:51.590 06:08:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:51.590 06:08:58 -- nvmf/common.sh@470 -- # waitforlisten 1096787 00:13:51.590 06:08:58 -- common/autotest_common.sh@819 -- # '[' -z 1096787 ']' 00:13:51.590 06:08:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.590 06:08:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:51.590 06:08:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.590 06:08:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:51.590 06:08:58 -- common/autotest_common.sh@10 -- # set +x 00:13:51.590 [2024-07-13 06:08:58.048393] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
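
Note: nvmfappstart, whose trace begins here, launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then sits in waitforlisten until the target's JSON-RPC socket responds. A minimal sketch of that pattern, assuming the default /var/tmp/spdk.sock socket and using an rpc_get_methods poll in place of the helper's internals (which can differ between SPDK versions), looks like:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
        sleep 0.5
    done
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # first RPC this test issues once the target is up

Because the RPC channel is a UNIX-domain socket, rpc.py can reach the target from the root namespace even though the NVMe/TCP listener lives inside cvl_0_0_ns_spdk; only the data path (10.0.0.2:4420) is namespaced.
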
00:13:51.590 [2024-07-13 06:08:58.048491] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.590 EAL: No free 2048 kB hugepages reported on node 1 00:13:51.848 [2024-07-13 06:08:58.118646] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.848 [2024-07-13 06:08:58.231784] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:51.848 [2024-07-13 06:08:58.231986] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:51.848 [2024-07-13 06:08:58.232009] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:51.848 [2024-07-13 06:08:58.232023] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:51.848 [2024-07-13 06:08:58.232054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.799 06:08:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:52.799 06:08:58 -- common/autotest_common.sh@852 -- # return 0 00:13:52.799 06:08:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:52.799 06:08:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:52.799 06:08:58 -- common/autotest_common.sh@10 -- # set +x 00:13:52.799 06:08:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.799 06:08:58 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:52.799 [2024-07-13 06:08:59.204334] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.799 06:08:59 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:13:52.799 06:08:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:52.799 06:08:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:52.799 06:08:59 -- common/autotest_common.sh@10 -- # set +x 00:13:52.799 ************************************ 00:13:52.799 START TEST lvs_grow_clean 00:13:52.799 ************************************ 00:13:52.799 06:08:59 -- common/autotest_common.sh@1104 -- # lvs_grow 00:13:52.799 06:08:59 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:13:52.799 06:08:59 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:13:52.799 06:08:59 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:13:52.799 06:08:59 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:13:52.799 06:08:59 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:13:52.799 06:08:59 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:13:52.799 06:08:59 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:52.799 06:08:59 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:52.799 06:08:59 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:53.060 06:08:59 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:13:53.060 06:08:59 -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:13:53.319 06:08:59 -- target/nvmf_lvs_grow.sh@28 -- # lvs=c93d5ea1-b02e-4746-a981-dbc4321d5162 00:13:53.319 06:08:59 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c93d5ea1-b02e-4746-a981-dbc4321d5162 00:13:53.319 06:08:59 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:13:53.576 06:09:00 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:13:53.576 06:09:00 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:13:53.576 06:09:00 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c93d5ea1-b02e-4746-a981-dbc4321d5162 lvol 150 00:13:53.834 06:09:00 -- target/nvmf_lvs_grow.sh@33 -- # lvol=a1a2a2c4-4dc0-402a-a716-c59a04806b69 00:13:53.834 06:09:00 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:53.834 06:09:00 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:13:54.093 [2024-07-13 06:09:00.492233] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:13:54.093 [2024-07-13 06:09:00.492333] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:13:54.093 true 00:13:54.093 06:09:00 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c93d5ea1-b02e-4746-a981-dbc4321d5162 00:13:54.093 06:09:00 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:13:54.351 06:09:00 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:13:54.351 06:09:00 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:54.608 06:09:00 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a1a2a2c4-4dc0-402a-a716-c59a04806b69 00:13:54.865 06:09:01 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:55.124 [2024-07-13 06:09:01.459272] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:55.124 06:09:01 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:55.382 06:09:01 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1097344 00:13:55.382 06:09:01 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:13:55.382 06:09:01 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:55.382 06:09:01 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1097344 /var/tmp/bdevperf.sock 00:13:55.382 06:09:01 -- common/autotest_common.sh@819 -- # '[' -z 1097344 ']' 00:13:55.382 
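
Note: the lvs_grow_clean setup traced above deliberately puts the logical-volume store on a file-backed AIO bdev, because a plain file is trivial to enlarge later. With rpc.py as shorthand for scripts/rpc.py and /path/to/aio_bdev standing in for the aio_bdev file under test/nvmf/target, the flow so far amounts to:

    truncate -s 200M /path/to/aio_bdev                          # 200 MiB backing file
    rpc.py bdev_aio_create /path/to/aio_bdev aio_bdev 4096      # expose it as an AIO bdev with 4 KiB blocks
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)        # 4 MiB clusters; the store reports 49 data clusters
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)          # 150 MiB lvol inside the store
    truncate -s 400M /path/to/aio_bdev                          # grow the backing file...
    rpc.py bdev_aio_rescan aio_bdev                             # ...and make the AIO bdev pick up the new size

Rescanning alone does not change the store geometry: bdev_lvol_get_lvstores still reports 49 total_data_clusters, which is exactly what the (( data_clusters == 49 )) check above asserts. The lvol is then exported through nqn.2016-06.io.spdk:cnode0 and bdevperf attaches to it over TCP; the actual bdev_lvol_grow_lvstore call, and the re-check that total_data_clusters has become 99, happen further down while that I/O is in flight.
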
06:09:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:55.382 06:09:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:55.382 06:09:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:55.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:55.382 06:09:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:55.382 06:09:01 -- common/autotest_common.sh@10 -- # set +x 00:13:55.382 [2024-07-13 06:09:01.751955] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:55.382 [2024-07-13 06:09:01.752032] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1097344 ] 00:13:55.382 EAL: No free 2048 kB hugepages reported on node 1 00:13:55.382 [2024-07-13 06:09:01.816275] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:55.638 [2024-07-13 06:09:01.940371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.201 06:09:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:56.201 06:09:02 -- common/autotest_common.sh@852 -- # return 0 00:13:56.201 06:09:02 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:13:56.767 Nvme0n1 00:13:56.767 06:09:03 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:13:57.025 [ 00:13:57.025 { 00:13:57.025 "name": "Nvme0n1", 00:13:57.025 "aliases": [ 00:13:57.025 "a1a2a2c4-4dc0-402a-a716-c59a04806b69" 00:13:57.025 ], 00:13:57.025 "product_name": "NVMe disk", 00:13:57.025 "block_size": 4096, 00:13:57.025 "num_blocks": 38912, 00:13:57.025 "uuid": "a1a2a2c4-4dc0-402a-a716-c59a04806b69", 00:13:57.025 "assigned_rate_limits": { 00:13:57.025 "rw_ios_per_sec": 0, 00:13:57.025 "rw_mbytes_per_sec": 0, 00:13:57.025 "r_mbytes_per_sec": 0, 00:13:57.025 "w_mbytes_per_sec": 0 00:13:57.025 }, 00:13:57.025 "claimed": false, 00:13:57.025 "zoned": false, 00:13:57.025 "supported_io_types": { 00:13:57.025 "read": true, 00:13:57.025 "write": true, 00:13:57.025 "unmap": true, 00:13:57.025 "write_zeroes": true, 00:13:57.025 "flush": true, 00:13:57.025 "reset": true, 00:13:57.025 "compare": true, 00:13:57.025 "compare_and_write": true, 00:13:57.025 "abort": true, 00:13:57.025 "nvme_admin": true, 00:13:57.025 "nvme_io": true 00:13:57.025 }, 00:13:57.025 "driver_specific": { 00:13:57.025 "nvme": [ 00:13:57.025 { 00:13:57.025 "trid": { 00:13:57.025 "trtype": "TCP", 00:13:57.025 "adrfam": "IPv4", 00:13:57.025 "traddr": "10.0.0.2", 00:13:57.025 "trsvcid": "4420", 00:13:57.025 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:13:57.025 }, 00:13:57.025 "ctrlr_data": { 00:13:57.025 "cntlid": 1, 00:13:57.025 "vendor_id": "0x8086", 00:13:57.025 "model_number": "SPDK bdev Controller", 00:13:57.025 "serial_number": "SPDK0", 00:13:57.025 "firmware_revision": "24.01.1", 00:13:57.025 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:57.025 "oacs": { 00:13:57.025 "security": 0, 00:13:57.025 "format": 0, 00:13:57.025 "firmware": 0, 00:13:57.025 "ns_manage": 0 00:13:57.025 }, 00:13:57.025 "multi_ctrlr": 
true, 00:13:57.025 "ana_reporting": false 00:13:57.025 }, 00:13:57.025 "vs": { 00:13:57.025 "nvme_version": "1.3" 00:13:57.025 }, 00:13:57.025 "ns_data": { 00:13:57.025 "id": 1, 00:13:57.025 "can_share": true 00:13:57.025 } 00:13:57.025 } 00:13:57.025 ], 00:13:57.025 "mp_policy": "active_passive" 00:13:57.025 } 00:13:57.025 } 00:13:57.025 ] 00:13:57.025 06:09:03 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1097621 00:13:57.025 06:09:03 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:13:57.025 06:09:03 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:57.025 Running I/O for 10 seconds... 00:13:58.397 Latency(us) 00:13:58.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.397 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:58.397 Nvme0n1 : 1.00 14398.00 56.24 0.00 0.00 0.00 0.00 0.00 00:13:58.397 =================================================================================================================== 00:13:58.398 Total : 14398.00 56.24 0.00 0.00 0.00 0.00 0.00 00:13:58.398 00:13:58.963 06:09:05 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c93d5ea1-b02e-4746-a981-dbc4321d5162 00:13:59.221 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:59.221 Nvme0n1 : 2.00 14558.50 56.87 0.00 0.00 0.00 0.00 0.00 00:13:59.221 =================================================================================================================== 00:13:59.221 Total : 14558.50 56.87 0.00 0.00 0.00 0.00 0.00 00:13:59.221 00:13:59.221 true 00:13:59.221 06:09:05 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c93d5ea1-b02e-4746-a981-dbc4321d5162 00:13:59.221 06:09:05 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:13:59.479 06:09:05 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:13:59.479 06:09:05 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:13:59.479 06:09:05 -- target/nvmf_lvs_grow.sh@65 -- # wait 1097621 00:14:00.045 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:00.045 Nvme0n1 : 3.00 14675.67 57.33 0.00 0.00 0.00 0.00 0.00 00:14:00.045 =================================================================================================================== 00:14:00.045 Total : 14675.67 57.33 0.00 0.00 0.00 0.00 0.00 00:14:00.045 00:14:01.418 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:01.418 Nvme0n1 : 4.00 14718.75 57.50 0.00 0.00 0.00 0.00 0.00 00:14:01.418 =================================================================================================================== 00:14:01.418 Total : 14718.75 57.50 0.00 0.00 0.00 0.00 0.00 00:14:01.418 00:14:02.351 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:02.351 Nvme0n1 : 5.00 14744.40 57.60 0.00 0.00 0.00 0.00 0.00 00:14:02.351 =================================================================================================================== 00:14:02.351 Total : 14744.40 57.60 0.00 0.00 0.00 0.00 0.00 00:14:02.351 00:14:03.285 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:03.286 Nvme0n1 : 6.00 14782.00 57.74 0.00 0.00 0.00 0.00 0.00 00:14:03.286 
=================================================================================================================== 00:14:03.286 Total : 14782.00 57.74 0.00 0.00 0.00 0.00 0.00 00:14:03.286 00:14:04.220 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:04.220 Nvme0n1 : 7.00 14818.14 57.88 0.00 0.00 0.00 0.00 0.00 00:14:04.220 =================================================================================================================== 00:14:04.220 Total : 14818.14 57.88 0.00 0.00 0.00 0.00 0.00 00:14:04.220 00:14:05.155 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:05.155 Nvme0n1 : 8.00 14853.88 58.02 0.00 0.00 0.00 0.00 0.00 00:14:05.155 =================================================================================================================== 00:14:05.155 Total : 14853.88 58.02 0.00 0.00 0.00 0.00 0.00 00:14:05.155 00:14:06.091 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:06.091 Nvme0n1 : 9.00 14874.44 58.10 0.00 0.00 0.00 0.00 0.00 00:14:06.091 =================================================================================================================== 00:14:06.091 Total : 14874.44 58.10 0.00 0.00 0.00 0.00 0.00 00:14:06.091 00:14:07.023 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:07.023 Nvme0n1 : 10.00 14891.00 58.17 0.00 0.00 0.00 0.00 0.00 00:14:07.023 =================================================================================================================== 00:14:07.023 Total : 14891.00 58.17 0.00 0.00 0.00 0.00 0.00 00:14:07.023 00:14:07.023 00:14:07.023 Latency(us) 00:14:07.023 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.023 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:07.023 Nvme0n1 : 10.01 14891.09 58.17 0.00 0.00 8589.92 4466.16 15534.46 00:14:07.023 =================================================================================================================== 00:14:07.023 Total : 14891.09 58.17 0.00 0.00 8589.92 4466.16 15534.46 00:14:07.023 0 00:14:07.023 06:09:13 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1097344 00:14:07.023 06:09:13 -- common/autotest_common.sh@926 -- # '[' -z 1097344 ']' 00:14:07.023 06:09:13 -- common/autotest_common.sh@930 -- # kill -0 1097344 00:14:07.023 06:09:13 -- common/autotest_common.sh@931 -- # uname 00:14:07.023 06:09:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:07.023 06:09:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1097344 00:14:07.280 06:09:13 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:07.280 06:09:13 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:07.280 06:09:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1097344' 00:14:07.280 killing process with pid 1097344 00:14:07.280 06:09:13 -- common/autotest_common.sh@945 -- # kill 1097344 00:14:07.280 Received shutdown signal, test time was about 10.000000 seconds 00:14:07.280 00:14:07.280 Latency(us) 00:14:07.280 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.280 =================================================================================================================== 00:14:07.280 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:07.280 06:09:13 -- common/autotest_common.sh@950 -- # wait 1097344 00:14:07.538 06:09:13 -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:07.795 06:09:14 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c93d5ea1-b02e-4746-a981-dbc4321d5162 00:14:07.795 06:09:14 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:08.053 06:09:14 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:08.053 06:09:14 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:08.053 06:09:14 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:08.311 [2024-07-13 06:09:14.587666] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:08.311 06:09:14 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c93d5ea1-b02e-4746-a981-dbc4321d5162 00:14:08.311 06:09:14 -- common/autotest_common.sh@640 -- # local es=0 00:14:08.311 06:09:14 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c93d5ea1-b02e-4746-a981-dbc4321d5162 00:14:08.311 06:09:14 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:08.311 06:09:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:08.311 06:09:14 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:08.311 06:09:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:08.311 06:09:14 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:08.311 06:09:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:08.311 06:09:14 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:08.311 06:09:14 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:08.311 06:09:14 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c93d5ea1-b02e-4746-a981-dbc4321d5162 00:14:08.569 request: 00:14:08.569 { 00:14:08.569 "uuid": "c93d5ea1-b02e-4746-a981-dbc4321d5162", 00:14:08.569 "method": "bdev_lvol_get_lvstores", 00:14:08.569 "req_id": 1 00:14:08.569 } 00:14:08.569 Got JSON-RPC error response 00:14:08.569 response: 00:14:08.569 { 00:14:08.569 "code": -19, 00:14:08.569 "message": "No such device" 00:14:08.569 } 00:14:08.569 06:09:14 -- common/autotest_common.sh@643 -- # es=1 00:14:08.569 06:09:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:08.569 06:09:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:08.569 06:09:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:08.569 06:09:14 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:08.827 aio_bdev 00:14:08.827 06:09:15 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev a1a2a2c4-4dc0-402a-a716-c59a04806b69 00:14:08.827 06:09:15 -- common/autotest_common.sh@887 -- # local bdev_name=a1a2a2c4-4dc0-402a-a716-c59a04806b69 00:14:08.827 06:09:15 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:08.827 06:09:15 -- common/autotest_common.sh@889 -- # local i 00:14:08.827 06:09:15 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:08.827 06:09:15 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:08.827 06:09:15 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:08.827 06:09:15 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a1a2a2c4-4dc0-402a-a716-c59a04806b69 -t 2000 00:14:09.085 [ 00:14:09.085 { 00:14:09.085 "name": "a1a2a2c4-4dc0-402a-a716-c59a04806b69", 00:14:09.085 "aliases": [ 00:14:09.085 "lvs/lvol" 00:14:09.085 ], 00:14:09.085 "product_name": "Logical Volume", 00:14:09.085 "block_size": 4096, 00:14:09.085 "num_blocks": 38912, 00:14:09.085 "uuid": "a1a2a2c4-4dc0-402a-a716-c59a04806b69", 00:14:09.085 "assigned_rate_limits": { 00:14:09.085 "rw_ios_per_sec": 0, 00:14:09.085 "rw_mbytes_per_sec": 0, 00:14:09.085 "r_mbytes_per_sec": 0, 00:14:09.085 "w_mbytes_per_sec": 0 00:14:09.085 }, 00:14:09.085 "claimed": false, 00:14:09.085 "zoned": false, 00:14:09.085 "supported_io_types": { 00:14:09.085 "read": true, 00:14:09.085 "write": true, 00:14:09.085 "unmap": true, 00:14:09.085 "write_zeroes": true, 00:14:09.085 "flush": false, 00:14:09.085 "reset": true, 00:14:09.085 "compare": false, 00:14:09.085 "compare_and_write": false, 00:14:09.085 "abort": false, 00:14:09.085 "nvme_admin": false, 00:14:09.085 "nvme_io": false 00:14:09.085 }, 00:14:09.085 "driver_specific": { 00:14:09.085 "lvol": { 00:14:09.085 "lvol_store_uuid": "c93d5ea1-b02e-4746-a981-dbc4321d5162", 00:14:09.085 "base_bdev": "aio_bdev", 00:14:09.085 "thin_provision": false, 00:14:09.085 "snapshot": false, 00:14:09.085 "clone": false, 00:14:09.085 "esnap_clone": false 00:14:09.085 } 00:14:09.085 } 00:14:09.085 } 00:14:09.085 ] 00:14:09.085 06:09:15 -- common/autotest_common.sh@895 -- # return 0 00:14:09.085 06:09:15 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c93d5ea1-b02e-4746-a981-dbc4321d5162 00:14:09.085 06:09:15 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:09.343 06:09:15 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:09.343 06:09:15 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c93d5ea1-b02e-4746-a981-dbc4321d5162 00:14:09.343 06:09:15 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:09.600 06:09:16 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:09.600 06:09:16 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a1a2a2c4-4dc0-402a-a716-c59a04806b69 00:14:09.858 06:09:16 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c93d5ea1-b02e-4746-a981-dbc4321d5162 00:14:10.116 06:09:16 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:10.374 06:09:16 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:10.374 00:14:10.374 real 0m17.630s 00:14:10.374 user 0m17.342s 00:14:10.374 sys 0m1.916s 00:14:10.374 06:09:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:14:10.374 06:09:16 -- common/autotest_common.sh@10 -- # set +x 00:14:10.374 ************************************ 00:14:10.374 END TEST lvs_grow_clean 00:14:10.374 ************************************ 00:14:10.374 06:09:16 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:10.374 06:09:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:10.374 06:09:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:10.374 06:09:16 -- common/autotest_common.sh@10 -- # set +x 00:14:10.374 ************************************ 00:14:10.374 START TEST lvs_grow_dirty 00:14:10.374 ************************************ 00:14:10.374 06:09:16 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:14:10.374 06:09:16 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:10.374 06:09:16 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:10.374 06:09:16 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:10.374 06:09:16 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:10.374 06:09:16 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:10.374 06:09:16 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:10.374 06:09:16 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:10.632 06:09:16 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:10.632 06:09:16 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:10.889 06:09:17 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:10.889 06:09:17 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:11.147 06:09:17 -- target/nvmf_lvs_grow.sh@28 -- # lvs=a8e9ffcd-f339-4d14-ada6-30851e425551 00:14:11.147 06:09:17 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a8e9ffcd-f339-4d14-ada6-30851e425551 00:14:11.147 06:09:17 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:11.147 06:09:17 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:11.147 06:09:17 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:11.405 06:09:17 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a8e9ffcd-f339-4d14-ada6-30851e425551 lvol 150 00:14:11.405 06:09:17 -- target/nvmf_lvs_grow.sh@33 -- # lvol=7b8aad49-e73b-448f-92a1-19b8feb5f400 00:14:11.405 06:09:17 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:11.405 06:09:17 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:11.663 [2024-07-13 06:09:18.123045] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:11.663 [2024-07-13 06:09:18.123145] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:11.663 
true 00:14:11.663 06:09:18 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a8e9ffcd-f339-4d14-ada6-30851e425551 00:14:11.663 06:09:18 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:11.920 06:09:18 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:11.920 06:09:18 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:12.178 06:09:18 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7b8aad49-e73b-448f-92a1-19b8feb5f400 00:14:12.435 06:09:18 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:12.693 06:09:19 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:12.951 06:09:19 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1100094 00:14:12.951 06:09:19 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:12.951 06:09:19 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:12.951 06:09:19 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1100094 /var/tmp/bdevperf.sock 00:14:12.951 06:09:19 -- common/autotest_common.sh@819 -- # '[' -z 1100094 ']' 00:14:12.951 06:09:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:12.951 06:09:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:12.951 06:09:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:12.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:12.951 06:09:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:12.951 06:09:19 -- common/autotest_common.sh@10 -- # set +x 00:14:12.951 [2024-07-13 06:09:19.365764] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
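At this point the dirty variant has everything in place: a 200M file-backed AIO bdev split into 4 MiB clusters (50 clusters, 49 of which end up as data clusters once lvstore metadata is accounted for), a 150M lvol exported through nqn.2016-06.io.spdk:cnode0, and the backing file already truncated to 400M and rescanned (block count 51200 to 102400). The grow itself, bdev_lvol_grow_lvstore, is issued a little further down while bdevperf I/O is in flight. A sketch of that grow sequence, with rpc.py assumed on PATH and the file path and UUID as placeholders:

  #!/usr/bin/env bash
  set -euo pipefail
  RPC=rpc.py                                  # assumed: SPDK scripts/rpc.py on PATH
  AIO_FILE=/tmp/aio_bdev                      # placeholder for the test's aio_bdev file
  LVS_UUID=$1                                 # UUID printed by bdev_lvol_create_lvstore
  # Assumes the bdev was created earlier as: rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096
  # Enlarge the backing file; the AIO bdev does not notice until it is rescanned.
  truncate -s 400M "$AIO_FILE"
  # Rescan updates the bdev's block count (51200 -> 102400 blocks of 4096 B here).
  "$RPC" bdev_aio_rescan aio_bdev
  # Grow the lvstore onto the newly visible clusters; existing lvols stay online.
  "$RPC" bdev_lvol_grow_lvstore -u "$LVS_UUID"
  "$RPC" bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].total_data_clusters'   # now 99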
00:14:12.951 [2024-07-13 06:09:19.365834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1100094 ] 00:14:12.951 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.951 [2024-07-13 06:09:19.427353] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.209 [2024-07-13 06:09:19.541817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.143 06:09:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:14.143 06:09:20 -- common/autotest_common.sh@852 -- # return 0 00:14:14.143 06:09:20 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:14.402 Nvme0n1 00:14:14.402 06:09:20 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:14.660 [ 00:14:14.660 { 00:14:14.660 "name": "Nvme0n1", 00:14:14.660 "aliases": [ 00:14:14.660 "7b8aad49-e73b-448f-92a1-19b8feb5f400" 00:14:14.660 ], 00:14:14.660 "product_name": "NVMe disk", 00:14:14.660 "block_size": 4096, 00:14:14.660 "num_blocks": 38912, 00:14:14.660 "uuid": "7b8aad49-e73b-448f-92a1-19b8feb5f400", 00:14:14.660 "assigned_rate_limits": { 00:14:14.660 "rw_ios_per_sec": 0, 00:14:14.660 "rw_mbytes_per_sec": 0, 00:14:14.660 "r_mbytes_per_sec": 0, 00:14:14.660 "w_mbytes_per_sec": 0 00:14:14.660 }, 00:14:14.660 "claimed": false, 00:14:14.660 "zoned": false, 00:14:14.660 "supported_io_types": { 00:14:14.660 "read": true, 00:14:14.660 "write": true, 00:14:14.660 "unmap": true, 00:14:14.660 "write_zeroes": true, 00:14:14.660 "flush": true, 00:14:14.660 "reset": true, 00:14:14.660 "compare": true, 00:14:14.660 "compare_and_write": true, 00:14:14.660 "abort": true, 00:14:14.660 "nvme_admin": true, 00:14:14.660 "nvme_io": true 00:14:14.660 }, 00:14:14.660 "driver_specific": { 00:14:14.660 "nvme": [ 00:14:14.660 { 00:14:14.660 "trid": { 00:14:14.660 "trtype": "TCP", 00:14:14.660 "adrfam": "IPv4", 00:14:14.660 "traddr": "10.0.0.2", 00:14:14.660 "trsvcid": "4420", 00:14:14.660 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:14.660 }, 00:14:14.660 "ctrlr_data": { 00:14:14.660 "cntlid": 1, 00:14:14.660 "vendor_id": "0x8086", 00:14:14.660 "model_number": "SPDK bdev Controller", 00:14:14.660 "serial_number": "SPDK0", 00:14:14.660 "firmware_revision": "24.01.1", 00:14:14.660 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:14.660 "oacs": { 00:14:14.660 "security": 0, 00:14:14.660 "format": 0, 00:14:14.660 "firmware": 0, 00:14:14.660 "ns_manage": 0 00:14:14.660 }, 00:14:14.660 "multi_ctrlr": true, 00:14:14.660 "ana_reporting": false 00:14:14.660 }, 00:14:14.660 "vs": { 00:14:14.660 "nvme_version": "1.3" 00:14:14.660 }, 00:14:14.660 "ns_data": { 00:14:14.660 "id": 1, 00:14:14.660 "can_share": true 00:14:14.660 } 00:14:14.660 } 00:14:14.660 ], 00:14:14.661 "mp_policy": "active_passive" 00:14:14.661 } 00:14:14.661 } 00:14:14.661 ] 00:14:14.661 06:09:21 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1100298 00:14:14.661 06:09:21 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:14.661 06:09:21 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:14.919 Running I/O 
for 10 seconds... 00:14:15.855 Latency(us) 00:14:15.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.855 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:15.855 Nvme0n1 : 1.00 14336.00 56.00 0.00 0.00 0.00 0.00 0.00 00:14:15.855 =================================================================================================================== 00:14:15.855 Total : 14336.00 56.00 0.00 0.00 0.00 0.00 0.00 00:14:15.855 00:14:16.791 06:09:23 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a8e9ffcd-f339-4d14-ada6-30851e425551 00:14:16.791 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:16.791 Nvme0n1 : 2.00 14495.50 56.62 0.00 0.00 0.00 0.00 0.00 00:14:16.791 =================================================================================================================== 00:14:16.791 Total : 14495.50 56.62 0.00 0.00 0.00 0.00 0.00 00:14:16.791 00:14:17.048 true 00:14:17.048 06:09:23 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a8e9ffcd-f339-4d14-ada6-30851e425551 00:14:17.048 06:09:23 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:17.306 06:09:23 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:17.306 06:09:23 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:17.306 06:09:23 -- target/nvmf_lvs_grow.sh@65 -- # wait 1100298 00:14:17.873 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:17.873 Nvme0n1 : 3.00 14569.67 56.91 0.00 0.00 0.00 0.00 0.00 00:14:17.873 =================================================================================================================== 00:14:17.873 Total : 14569.67 56.91 0.00 0.00 0.00 0.00 0.00 00:14:17.873 00:14:18.808 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:18.808 Nvme0n1 : 4.00 14638.50 57.18 0.00 0.00 0.00 0.00 0.00 00:14:18.808 =================================================================================================================== 00:14:18.808 Total : 14638.50 57.18 0.00 0.00 0.00 0.00 0.00 00:14:18.808 00:14:19.743 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:19.743 Nvme0n1 : 5.00 14706.00 57.45 0.00 0.00 0.00 0.00 0.00 00:14:19.743 =================================================================================================================== 00:14:19.743 Total : 14706.00 57.45 0.00 0.00 0.00 0.00 0.00 00:14:19.743 00:14:21.120 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:21.120 Nvme0n1 : 6.00 14761.33 57.66 0.00 0.00 0.00 0.00 0.00 00:14:21.120 =================================================================================================================== 00:14:21.120 Total : 14761.33 57.66 0.00 0.00 0.00 0.00 0.00 00:14:21.120 00:14:22.055 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:22.055 Nvme0n1 : 7.00 14801.14 57.82 0.00 0.00 0.00 0.00 0.00 00:14:22.055 =================================================================================================================== 00:14:22.055 Total : 14801.14 57.82 0.00 0.00 0.00 0.00 0.00 00:14:22.055 00:14:22.991 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:22.991 Nvme0n1 : 8.00 14838.75 57.96 0.00 0.00 0.00 0.00 0.00 00:14:22.991 
=================================================================================================================== 00:14:22.991 Total : 14838.75 57.96 0.00 0.00 0.00 0.00 0.00 00:14:22.991 00:14:23.925 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:23.925 Nvme0n1 : 9.00 14868.33 58.08 0.00 0.00 0.00 0.00 0.00 00:14:23.925 =================================================================================================================== 00:14:23.925 Total : 14868.33 58.08 0.00 0.00 0.00 0.00 0.00 00:14:23.925 00:14:24.860 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:24.860 Nvme0n1 : 10.00 14893.40 58.18 0.00 0.00 0.00 0.00 0.00 00:14:24.860 =================================================================================================================== 00:14:24.860 Total : 14893.40 58.18 0.00 0.00 0.00 0.00 0.00 00:14:24.860 00:14:24.860 00:14:24.860 Latency(us) 00:14:24.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:24.860 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:24.860 Nvme0n1 : 10.01 14895.62 58.19 0.00 0.00 8587.65 2172.40 16602.45 00:14:24.860 =================================================================================================================== 00:14:24.860 Total : 14895.62 58.19 0.00 0.00 8587.65 2172.40 16602.45 00:14:24.860 0 00:14:24.860 06:09:31 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1100094 00:14:24.860 06:09:31 -- common/autotest_common.sh@926 -- # '[' -z 1100094 ']' 00:14:24.860 06:09:31 -- common/autotest_common.sh@930 -- # kill -0 1100094 00:14:24.860 06:09:31 -- common/autotest_common.sh@931 -- # uname 00:14:24.860 06:09:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:24.860 06:09:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1100094 00:14:24.860 06:09:31 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:24.860 06:09:31 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:24.860 06:09:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1100094' 00:14:24.860 killing process with pid 1100094 00:14:24.860 06:09:31 -- common/autotest_common.sh@945 -- # kill 1100094 00:14:24.860 Received shutdown signal, test time was about 10.000000 seconds 00:14:24.860 00:14:24.860 Latency(us) 00:14:24.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:24.860 =================================================================================================================== 00:14:24.860 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:24.860 06:09:31 -- common/autotest_common.sh@950 -- # wait 1100094 00:14:25.117 06:09:31 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:25.375 06:09:31 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a8e9ffcd-f339-4d14-ada6-30851e425551 00:14:25.375 06:09:31 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:25.639 06:09:32 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:25.639 06:09:32 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:14:25.639 06:09:32 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 1096787 00:14:25.639 06:09:32 -- target/nvmf_lvs_grow.sh@74 -- # wait 1096787 00:14:25.639 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 1096787 Killed "${NVMF_APP[@]}" "$@" 00:14:25.639 06:09:32 -- target/nvmf_lvs_grow.sh@74 -- # true 00:14:25.639 06:09:32 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:14:25.639 06:09:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:25.639 06:09:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:25.639 06:09:32 -- common/autotest_common.sh@10 -- # set +x 00:14:25.639 06:09:32 -- nvmf/common.sh@469 -- # nvmfpid=1101609 00:14:25.639 06:09:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:25.639 06:09:32 -- nvmf/common.sh@470 -- # waitforlisten 1101609 00:14:25.639 06:09:32 -- common/autotest_common.sh@819 -- # '[' -z 1101609 ']' 00:14:25.639 06:09:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.639 06:09:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:25.639 06:09:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.639 06:09:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:25.639 06:09:32 -- common/autotest_common.sh@10 -- # set +x 00:14:25.639 [2024-07-13 06:09:32.103747] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:25.639 [2024-07-13 06:09:32.103839] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.639 EAL: No free 2048 kB hugepages reported on node 1 00:14:25.901 [2024-07-13 06:09:32.170517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.901 [2024-07-13 06:09:32.277523] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:25.901 [2024-07-13 06:09:32.277687] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:25.901 [2024-07-13 06:09:32.277704] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:25.901 [2024-07-13 06:09:32.277716] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
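The dirty path hinges on the target dying without a clean lvstore unload: the original nvmf_tgt (pid 1096787) is killed with SIGKILL while the lvol is still attached, and a fresh target is started on core mask 0x1 before the AIO bdev is re-created, so the blobstore recovery visible in the following entries has to rebuild the metadata. A condensed sketch of that restart pattern, with nvmf_tgt, rpc.py, the socket path and the backing file all standing in for the harness's own helpers:

  #!/usr/bin/env bash
  set -euo pipefail
  RPC=rpc.py                                  # assumed: SPDK scripts/rpc.py on PATH
  NVMF_APP=nvmf_tgt                           # placeholder for build/bin/nvmf_tgt
  SOCK=/var/tmp/spdk.sock                     # default RPC socket, as in the log
  AIO_FILE=/tmp/aio_bdev                      # placeholder backing file
  wait_for_rpc() { until [ -S "$SOCK" ]; do sleep 0.1; done; }

  "$NVMF_APP" -m 0x1 & pid=$!                 # first target: builds the lvstore and takes writes
  wait_for_rpc
  "$RPC" bdev_aio_create "$AIO_FILE" aio_bdev 4096
  # ... create the lvstore and lvol and push I/O at it, as in the trace above ...
  kill -9 "$pid"                              # SIGKILL: no lvstore unload, superblock stays dirty
  wait "$pid" || true                         # shell prints "Killed", matching the log
  rm -f "$SOCK"                               # drop the stale socket left by the dead process

  "$NVMF_APP" -m 0x1 & pid=$!                 # second target: same backing file, fresh state
  wait_for_rpc
  # Re-creating the AIO bdev re-examines the lvstore; blobstore recovery runs and the
  # lvol reappears for the follow-up cluster checks.
  "$RPC" bdev_aio_create "$AIO_FILE" aio_bdev 4096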
00:14:25.901 [2024-07-13 06:09:32.277744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.831 06:09:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:26.831 06:09:33 -- common/autotest_common.sh@852 -- # return 0 00:14:26.831 06:09:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:26.831 06:09:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:26.831 06:09:33 -- common/autotest_common.sh@10 -- # set +x 00:14:26.831 06:09:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:26.831 06:09:33 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:26.831 [2024-07-13 06:09:33.324122] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:26.831 [2024-07-13 06:09:33.324263] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:26.831 [2024-07-13 06:09:33.324320] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:27.088 06:09:33 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:14:27.088 06:09:33 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 7b8aad49-e73b-448f-92a1-19b8feb5f400 00:14:27.088 06:09:33 -- common/autotest_common.sh@887 -- # local bdev_name=7b8aad49-e73b-448f-92a1-19b8feb5f400 00:14:27.088 06:09:33 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:27.088 06:09:33 -- common/autotest_common.sh@889 -- # local i 00:14:27.088 06:09:33 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:27.088 06:09:33 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:27.088 06:09:33 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:27.088 06:09:33 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7b8aad49-e73b-448f-92a1-19b8feb5f400 -t 2000 00:14:27.346 [ 00:14:27.346 { 00:14:27.346 "name": "7b8aad49-e73b-448f-92a1-19b8feb5f400", 00:14:27.346 "aliases": [ 00:14:27.346 "lvs/lvol" 00:14:27.346 ], 00:14:27.346 "product_name": "Logical Volume", 00:14:27.346 "block_size": 4096, 00:14:27.346 "num_blocks": 38912, 00:14:27.346 "uuid": "7b8aad49-e73b-448f-92a1-19b8feb5f400", 00:14:27.346 "assigned_rate_limits": { 00:14:27.346 "rw_ios_per_sec": 0, 00:14:27.346 "rw_mbytes_per_sec": 0, 00:14:27.346 "r_mbytes_per_sec": 0, 00:14:27.346 "w_mbytes_per_sec": 0 00:14:27.346 }, 00:14:27.346 "claimed": false, 00:14:27.346 "zoned": false, 00:14:27.346 "supported_io_types": { 00:14:27.346 "read": true, 00:14:27.346 "write": true, 00:14:27.346 "unmap": true, 00:14:27.346 "write_zeroes": true, 00:14:27.346 "flush": false, 00:14:27.346 "reset": true, 00:14:27.346 "compare": false, 00:14:27.346 "compare_and_write": false, 00:14:27.346 "abort": false, 00:14:27.346 "nvme_admin": false, 00:14:27.346 "nvme_io": false 00:14:27.346 }, 00:14:27.346 "driver_specific": { 00:14:27.346 "lvol": { 00:14:27.346 "lvol_store_uuid": "a8e9ffcd-f339-4d14-ada6-30851e425551", 00:14:27.346 "base_bdev": "aio_bdev", 00:14:27.346 "thin_provision": false, 00:14:27.346 "snapshot": false, 00:14:27.346 "clone": false, 00:14:27.346 "esnap_clone": false 00:14:27.346 } 00:14:27.346 } 00:14:27.346 } 00:14:27.346 ] 00:14:27.346 06:09:33 -- common/autotest_common.sh@895 -- # return 0 00:14:27.346 06:09:33 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a8e9ffcd-f339-4d14-ada6-30851e425551 00:14:27.346 06:09:33 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:14:27.603 06:09:34 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:14:27.603 06:09:34 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a8e9ffcd-f339-4d14-ada6-30851e425551 00:14:27.603 06:09:34 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:14:27.860 06:09:34 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:14:27.860 06:09:34 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:28.118 [2024-07-13 06:09:34.524906] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:28.118 06:09:34 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a8e9ffcd-f339-4d14-ada6-30851e425551 00:14:28.118 06:09:34 -- common/autotest_common.sh@640 -- # local es=0 00:14:28.118 06:09:34 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a8e9ffcd-f339-4d14-ada6-30851e425551 00:14:28.118 06:09:34 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:28.118 06:09:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:28.118 06:09:34 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:28.118 06:09:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:28.118 06:09:34 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:28.118 06:09:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:28.118 06:09:34 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:28.118 06:09:34 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:28.118 06:09:34 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a8e9ffcd-f339-4d14-ada6-30851e425551 00:14:28.376 request: 00:14:28.376 { 00:14:28.377 "uuid": "a8e9ffcd-f339-4d14-ada6-30851e425551", 00:14:28.377 "method": "bdev_lvol_get_lvstores", 00:14:28.377 "req_id": 1 00:14:28.377 } 00:14:28.377 Got JSON-RPC error response 00:14:28.377 response: 00:14:28.377 { 00:14:28.377 "code": -19, 00:14:28.377 "message": "No such device" 00:14:28.377 } 00:14:28.377 06:09:34 -- common/autotest_common.sh@643 -- # es=1 00:14:28.377 06:09:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:28.377 06:09:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:28.377 06:09:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:28.377 06:09:34 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:28.635 aio_bdev 00:14:28.635 06:09:35 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 7b8aad49-e73b-448f-92a1-19b8feb5f400 00:14:28.635 06:09:35 -- 
common/autotest_common.sh@887 -- # local bdev_name=7b8aad49-e73b-448f-92a1-19b8feb5f400 00:14:28.635 06:09:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:28.635 06:09:35 -- common/autotest_common.sh@889 -- # local i 00:14:28.635 06:09:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:28.635 06:09:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:28.635 06:09:35 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:28.893 06:09:35 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7b8aad49-e73b-448f-92a1-19b8feb5f400 -t 2000 00:14:29.151 [ 00:14:29.151 { 00:14:29.151 "name": "7b8aad49-e73b-448f-92a1-19b8feb5f400", 00:14:29.151 "aliases": [ 00:14:29.151 "lvs/lvol" 00:14:29.151 ], 00:14:29.151 "product_name": "Logical Volume", 00:14:29.151 "block_size": 4096, 00:14:29.151 "num_blocks": 38912, 00:14:29.151 "uuid": "7b8aad49-e73b-448f-92a1-19b8feb5f400", 00:14:29.151 "assigned_rate_limits": { 00:14:29.151 "rw_ios_per_sec": 0, 00:14:29.151 "rw_mbytes_per_sec": 0, 00:14:29.151 "r_mbytes_per_sec": 0, 00:14:29.151 "w_mbytes_per_sec": 0 00:14:29.151 }, 00:14:29.151 "claimed": false, 00:14:29.151 "zoned": false, 00:14:29.151 "supported_io_types": { 00:14:29.151 "read": true, 00:14:29.151 "write": true, 00:14:29.151 "unmap": true, 00:14:29.151 "write_zeroes": true, 00:14:29.151 "flush": false, 00:14:29.151 "reset": true, 00:14:29.151 "compare": false, 00:14:29.151 "compare_and_write": false, 00:14:29.151 "abort": false, 00:14:29.151 "nvme_admin": false, 00:14:29.151 "nvme_io": false 00:14:29.151 }, 00:14:29.151 "driver_specific": { 00:14:29.151 "lvol": { 00:14:29.151 "lvol_store_uuid": "a8e9ffcd-f339-4d14-ada6-30851e425551", 00:14:29.151 "base_bdev": "aio_bdev", 00:14:29.151 "thin_provision": false, 00:14:29.151 "snapshot": false, 00:14:29.151 "clone": false, 00:14:29.151 "esnap_clone": false 00:14:29.151 } 00:14:29.151 } 00:14:29.151 } 00:14:29.151 ] 00:14:29.151 06:09:35 -- common/autotest_common.sh@895 -- # return 0 00:14:29.151 06:09:35 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a8e9ffcd-f339-4d14-ada6-30851e425551 00:14:29.151 06:09:35 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:29.409 06:09:35 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:29.409 06:09:35 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a8e9ffcd-f339-4d14-ada6-30851e425551 00:14:29.409 06:09:35 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:29.667 06:09:35 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:29.667 06:09:35 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7b8aad49-e73b-448f-92a1-19b8feb5f400 00:14:29.925 06:09:36 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a8e9ffcd-f339-4d14-ada6-30851e425551 00:14:30.183 06:09:36 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:30.441 06:09:36 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:30.441 00:14:30.441 real 0m19.859s 00:14:30.441 user 
0m49.925s 00:14:30.441 sys 0m4.731s 00:14:30.441 06:09:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:30.441 06:09:36 -- common/autotest_common.sh@10 -- # set +x 00:14:30.441 ************************************ 00:14:30.441 END TEST lvs_grow_dirty 00:14:30.441 ************************************ 00:14:30.441 06:09:36 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:30.441 06:09:36 -- common/autotest_common.sh@796 -- # type=--id 00:14:30.441 06:09:36 -- common/autotest_common.sh@797 -- # id=0 00:14:30.441 06:09:36 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:14:30.441 06:09:36 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:30.441 06:09:36 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:14:30.441 06:09:36 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:14:30.441 06:09:36 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:14:30.441 06:09:36 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:30.441 nvmf_trace.0 00:14:30.441 06:09:36 -- common/autotest_common.sh@811 -- # return 0 00:14:30.441 06:09:36 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:30.441 06:09:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:30.441 06:09:36 -- nvmf/common.sh@116 -- # sync 00:14:30.441 06:09:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:30.441 06:09:36 -- nvmf/common.sh@119 -- # set +e 00:14:30.441 06:09:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:30.441 06:09:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:30.441 rmmod nvme_tcp 00:14:30.441 rmmod nvme_fabrics 00:14:30.441 rmmod nvme_keyring 00:14:30.441 06:09:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:30.441 06:09:36 -- nvmf/common.sh@123 -- # set -e 00:14:30.441 06:09:36 -- nvmf/common.sh@124 -- # return 0 00:14:30.441 06:09:36 -- nvmf/common.sh@477 -- # '[' -n 1101609 ']' 00:14:30.441 06:09:36 -- nvmf/common.sh@478 -- # killprocess 1101609 00:14:30.441 06:09:36 -- common/autotest_common.sh@926 -- # '[' -z 1101609 ']' 00:14:30.441 06:09:36 -- common/autotest_common.sh@930 -- # kill -0 1101609 00:14:30.441 06:09:36 -- common/autotest_common.sh@931 -- # uname 00:14:30.441 06:09:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:30.441 06:09:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1101609 00:14:30.441 06:09:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:30.441 06:09:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:30.441 06:09:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1101609' 00:14:30.441 killing process with pid 1101609 00:14:30.441 06:09:36 -- common/autotest_common.sh@945 -- # kill 1101609 00:14:30.441 06:09:36 -- common/autotest_common.sh@950 -- # wait 1101609 00:14:30.699 06:09:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:30.699 06:09:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:30.699 06:09:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:30.699 06:09:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:30.699 06:09:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:30.699 06:09:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.699 06:09:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:30.699 06:09:37 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:14:33.263 06:09:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:33.263 00:14:33.263 real 0m43.315s 00:14:33.263 user 1m13.521s 00:14:33.263 sys 0m8.466s 00:14:33.263 06:09:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:33.263 06:09:39 -- common/autotest_common.sh@10 -- # set +x 00:14:33.263 ************************************ 00:14:33.263 END TEST nvmf_lvs_grow 00:14:33.263 ************************************ 00:14:33.263 06:09:39 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:33.263 06:09:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:33.263 06:09:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:33.263 06:09:39 -- common/autotest_common.sh@10 -- # set +x 00:14:33.263 ************************************ 00:14:33.263 START TEST nvmf_bdev_io_wait 00:14:33.263 ************************************ 00:14:33.263 06:09:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:33.263 * Looking for test storage... 00:14:33.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:33.263 06:09:39 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:33.263 06:09:39 -- nvmf/common.sh@7 -- # uname -s 00:14:33.263 06:09:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.263 06:09:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.263 06:09:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.263 06:09:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.263 06:09:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.263 06:09:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.263 06:09:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.263 06:09:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.263 06:09:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.263 06:09:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.263 06:09:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:33.263 06:09:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:33.263 06:09:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.263 06:09:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.263 06:09:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:33.263 06:09:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:33.263 06:09:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.263 06:09:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.264 06:09:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.264 06:09:39 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.264 06:09:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.264 06:09:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.264 06:09:39 -- paths/export.sh@5 -- # export PATH 00:14:33.264 06:09:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.264 06:09:39 -- nvmf/common.sh@46 -- # : 0 00:14:33.264 06:09:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:33.264 06:09:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:33.264 06:09:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:33.264 06:09:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.264 06:09:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.264 06:09:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:33.264 06:09:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:33.264 06:09:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:33.264 06:09:39 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:33.264 06:09:39 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:33.264 06:09:39 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:33.264 06:09:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:33.264 06:09:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:33.264 06:09:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:33.264 06:09:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:33.264 06:09:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:33.264 06:09:39 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.264 06:09:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.264 06:09:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.264 06:09:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:33.264 06:09:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:33.264 06:09:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:33.264 06:09:39 -- common/autotest_common.sh@10 -- # set +x 00:14:35.159 06:09:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:35.160 06:09:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:35.160 06:09:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:35.160 06:09:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:35.160 06:09:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:35.160 06:09:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:35.160 06:09:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:35.160 06:09:41 -- nvmf/common.sh@294 -- # net_devs=() 00:14:35.160 06:09:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:35.160 06:09:41 -- nvmf/common.sh@295 -- # e810=() 00:14:35.160 06:09:41 -- nvmf/common.sh@295 -- # local -ga e810 00:14:35.160 06:09:41 -- nvmf/common.sh@296 -- # x722=() 00:14:35.160 06:09:41 -- nvmf/common.sh@296 -- # local -ga x722 00:14:35.160 06:09:41 -- nvmf/common.sh@297 -- # mlx=() 00:14:35.160 06:09:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:35.160 06:09:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:35.160 06:09:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:35.160 06:09:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:35.160 06:09:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:35.160 06:09:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:35.160 06:09:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:35.160 06:09:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:35.160 06:09:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:35.160 06:09:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:35.160 06:09:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:35.160 06:09:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:35.160 06:09:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:35.160 06:09:41 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:35.160 06:09:41 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:35.160 06:09:41 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:35.160 06:09:41 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:35.160 06:09:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:35.160 06:09:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:35.160 06:09:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:35.160 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:35.160 06:09:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:35.160 06:09:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:35.160 06:09:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:35.160 06:09:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:35.160 06:09:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:35.160 06:09:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
00:14:35.160 06:09:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:35.160 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:35.160 06:09:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:35.160 06:09:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:35.160 06:09:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:35.160 06:09:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:35.160 06:09:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:35.160 06:09:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:35.160 06:09:41 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:35.160 06:09:41 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:35.160 06:09:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:35.160 06:09:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.160 06:09:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:35.160 06:09:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.160 06:09:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:35.160 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:35.160 06:09:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:35.160 06:09:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:35.160 06:09:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.160 06:09:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:35.160 06:09:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.160 06:09:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:35.160 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:35.160 06:09:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:35.160 06:09:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:35.160 06:09:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:35.160 06:09:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:35.160 06:09:41 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:35.160 06:09:41 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:35.160 06:09:41 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:35.160 06:09:41 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:35.160 06:09:41 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:35.160 06:09:41 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:35.160 06:09:41 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:35.160 06:09:41 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:35.160 06:09:41 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:35.160 06:09:41 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:35.160 06:09:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:35.160 06:09:41 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:35.160 06:09:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:35.160 06:09:41 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:35.160 06:09:41 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:35.160 06:09:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:35.160 06:09:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:35.160 06:09:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:35.160 06:09:41 -- nvmf/common.sh@259 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:35.160 06:09:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:35.160 06:09:41 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:35.160 06:09:41 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:35.160 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:35.160 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:14:35.160 00:14:35.160 --- 10.0.0.2 ping statistics --- 00:14:35.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.160 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:14:35.160 06:09:41 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:35.160 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:35.160 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:14:35.160 00:14:35.160 --- 10.0.0.1 ping statistics --- 00:14:35.160 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.160 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:14:35.160 06:09:41 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:35.160 06:09:41 -- nvmf/common.sh@410 -- # return 0 00:14:35.160 06:09:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:35.160 06:09:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:35.160 06:09:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:35.160 06:09:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:35.160 06:09:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:35.160 06:09:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:35.160 06:09:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:35.160 06:09:41 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:35.160 06:09:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:35.160 06:09:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:35.160 06:09:41 -- common/autotest_common.sh@10 -- # set +x 00:14:35.160 06:09:41 -- nvmf/common.sh@469 -- # nvmfpid=1104192 00:14:35.160 06:09:41 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:35.160 06:09:41 -- nvmf/common.sh@470 -- # waitforlisten 1104192 00:14:35.160 06:09:41 -- common/autotest_common.sh@819 -- # '[' -z 1104192 ']' 00:14:35.160 06:09:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.160 06:09:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:35.160 06:09:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.160 06:09:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:35.160 06:09:41 -- common/autotest_common.sh@10 -- # set +x 00:14:35.160 [2024-07-13 06:09:41.539712] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
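Before any NVMe/TCP traffic can flow, the nvmf_tcp_init steps above split the two ice ports between the root namespace and a dedicated one: cvl_0_0 becomes the target side at 10.0.0.2 inside cvl_0_0_ns_spdk, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and a pair of pings proves the path. A condensed sketch of that setup using the same names and addresses (run as root):

  #!/usr/bin/env bash
  set -euo pipefail
  ns=cvl_0_0_ns_spdk          # namespace that will hold the target-side port
  tgt_if=cvl_0_0              # target interface, 10.0.0.2
  ini_if=cvl_0_1              # initiator interface, 10.0.0.1
  ip -4 addr flush "$tgt_if"; ip -4 addr flush "$ini_if"
  ip netns add "$ns"
  ip link set "$tgt_if" netns "$ns"                       # hide the target port in the namespace
  ip addr add 10.0.0.1/24 dev "$ini_if"
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
  ip link set "$ini_if" up
  ip netns exec "$ns" ip link set "$tgt_if" up
  ip netns exec "$ns" ip link set lo up
  # Firewall rule from the trace: accept TCP port 4420 on the initiator-side interface.
  iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                      # root namespace -> target
  ip netns exec "$ns" ping -c 1 10.0.0.1                  # target namespace -> initiator

From here on, each nvmf_tgt invocation in the trace is wrapped in ip netns exec cvl_0_0_ns_spdk so its listener binds inside the namespace.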
00:14:35.160 [2024-07-13 06:09:41.539789] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.160 EAL: No free 2048 kB hugepages reported on node 1 00:14:35.160 [2024-07-13 06:09:41.613759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:35.417 [2024-07-13 06:09:41.736092] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:35.417 [2024-07-13 06:09:41.736263] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:35.417 [2024-07-13 06:09:41.736283] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:35.417 [2024-07-13 06:09:41.736297] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:35.417 [2024-07-13 06:09:41.736358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.417 [2024-07-13 06:09:41.736409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:35.417 [2024-07-13 06:09:41.736460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:35.417 [2024-07-13 06:09:41.736463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.347 06:09:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:36.347 06:09:42 -- common/autotest_common.sh@852 -- # return 0 00:14:36.347 06:09:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:36.347 06:09:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:36.347 06:09:42 -- common/autotest_common.sh@10 -- # set +x 00:14:36.347 06:09:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:36.347 06:09:42 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:36.347 06:09:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.347 06:09:42 -- common/autotest_common.sh@10 -- # set +x 00:14:36.347 06:09:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.347 06:09:42 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:36.347 06:09:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.347 06:09:42 -- common/autotest_common.sh@10 -- # set +x 00:14:36.347 06:09:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.347 06:09:42 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:36.347 06:09:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.347 06:09:42 -- common/autotest_common.sh@10 -- # set +x 00:14:36.347 [2024-07-13 06:09:42.617517] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:36.347 06:09:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.347 06:09:42 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:36.347 06:09:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.347 06:09:42 -- common/autotest_common.sh@10 -- # set +x 00:14:36.347 Malloc0 00:14:36.347 06:09:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.347 06:09:42 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:36.347 06:09:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.347 06:09:42 -- common/autotest_common.sh@10 -- # set +x 00:14:36.347 06:09:42 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.347 06:09:42 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:36.347 06:09:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.347 06:09:42 -- common/autotest_common.sh@10 -- # set +x 00:14:36.347 06:09:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.347 06:09:42 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:36.347 06:09:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.347 06:09:42 -- common/autotest_common.sh@10 -- # set +x 00:14:36.347 [2024-07-13 06:09:42.678214] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:36.347 06:09:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.347 06:09:42 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1104403 00:14:36.347 06:09:42 -- target/bdev_io_wait.sh@30 -- # READ_PID=1104406 00:14:36.347 06:09:42 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:36.347 06:09:42 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:36.347 06:09:42 -- nvmf/common.sh@520 -- # config=() 00:14:36.347 06:09:42 -- nvmf/common.sh@520 -- # local subsystem config 00:14:36.347 06:09:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:36.347 06:09:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:36.347 { 00:14:36.347 "params": { 00:14:36.347 "name": "Nvme$subsystem", 00:14:36.347 "trtype": "$TEST_TRANSPORT", 00:14:36.347 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:36.347 "adrfam": "ipv4", 00:14:36.347 "trsvcid": "$NVMF_PORT", 00:14:36.347 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:36.347 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:36.347 "hdgst": ${hdgst:-false}, 00:14:36.347 "ddgst": ${ddgst:-false} 00:14:36.347 }, 00:14:36.347 "method": "bdev_nvme_attach_controller" 00:14:36.347 } 00:14:36.347 EOF 00:14:36.347 )") 00:14:36.347 06:09:42 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1104408 00:14:36.347 06:09:42 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:36.347 06:09:42 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:36.347 06:09:42 -- nvmf/common.sh@520 -- # config=() 00:14:36.347 06:09:42 -- nvmf/common.sh@520 -- # local subsystem config 00:14:36.347 06:09:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:36.347 06:09:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:36.347 { 00:14:36.347 "params": { 00:14:36.347 "name": "Nvme$subsystem", 00:14:36.347 "trtype": "$TEST_TRANSPORT", 00:14:36.347 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:36.347 "adrfam": "ipv4", 00:14:36.347 "trsvcid": "$NVMF_PORT", 00:14:36.347 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:36.347 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:36.347 "hdgst": ${hdgst:-false}, 00:14:36.347 "ddgst": ${ddgst:-false} 00:14:36.347 }, 00:14:36.347 "method": "bdev_nvme_attach_controller" 00:14:36.347 } 00:14:36.347 EOF 00:14:36.347 )") 00:14:36.347 06:09:42 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:36.347 06:09:42 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json 
/dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:36.347 06:09:42 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1104412 00:14:36.347 06:09:42 -- target/bdev_io_wait.sh@35 -- # sync 00:14:36.347 06:09:42 -- nvmf/common.sh@542 -- # cat 00:14:36.347 06:09:42 -- nvmf/common.sh@520 -- # config=() 00:14:36.347 06:09:42 -- nvmf/common.sh@520 -- # local subsystem config 00:14:36.347 06:09:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:36.347 06:09:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:36.347 { 00:14:36.347 "params": { 00:14:36.347 "name": "Nvme$subsystem", 00:14:36.347 "trtype": "$TEST_TRANSPORT", 00:14:36.347 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:36.347 "adrfam": "ipv4", 00:14:36.347 "trsvcid": "$NVMF_PORT", 00:14:36.347 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:36.347 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:36.347 "hdgst": ${hdgst:-false}, 00:14:36.347 "ddgst": ${ddgst:-false} 00:14:36.347 }, 00:14:36.347 "method": "bdev_nvme_attach_controller" 00:14:36.347 } 00:14:36.347 EOF 00:14:36.347 )") 00:14:36.347 06:09:42 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:36.347 06:09:42 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:36.347 06:09:42 -- nvmf/common.sh@520 -- # config=() 00:14:36.347 06:09:42 -- nvmf/common.sh@542 -- # cat 00:14:36.347 06:09:42 -- nvmf/common.sh@520 -- # local subsystem config 00:14:36.347 06:09:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:36.347 06:09:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:36.347 { 00:14:36.347 "params": { 00:14:36.347 "name": "Nvme$subsystem", 00:14:36.347 "trtype": "$TEST_TRANSPORT", 00:14:36.347 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:36.347 "adrfam": "ipv4", 00:14:36.347 "trsvcid": "$NVMF_PORT", 00:14:36.347 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:36.347 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:36.347 "hdgst": ${hdgst:-false}, 00:14:36.347 "ddgst": ${ddgst:-false} 00:14:36.347 }, 00:14:36.347 "method": "bdev_nvme_attach_controller" 00:14:36.347 } 00:14:36.347 EOF 00:14:36.347 )") 00:14:36.347 06:09:42 -- nvmf/common.sh@542 -- # cat 00:14:36.347 06:09:42 -- target/bdev_io_wait.sh@37 -- # wait 1104403 00:14:36.347 06:09:42 -- nvmf/common.sh@542 -- # cat 00:14:36.347 06:09:42 -- nvmf/common.sh@544 -- # jq . 00:14:36.347 06:09:42 -- nvmf/common.sh@544 -- # jq . 00:14:36.347 06:09:42 -- nvmf/common.sh@544 -- # jq . 00:14:36.347 06:09:42 -- nvmf/common.sh@545 -- # IFS=, 00:14:36.347 06:09:42 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:36.347 "params": { 00:14:36.347 "name": "Nvme1", 00:14:36.347 "trtype": "tcp", 00:14:36.347 "traddr": "10.0.0.2", 00:14:36.347 "adrfam": "ipv4", 00:14:36.347 "trsvcid": "4420", 00:14:36.347 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:36.347 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:36.347 "hdgst": false, 00:14:36.347 "ddgst": false 00:14:36.347 }, 00:14:36.347 "method": "bdev_nvme_attach_controller" 00:14:36.348 }' 00:14:36.348 06:09:42 -- nvmf/common.sh@544 -- # jq . 
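The target side of this bdev_io_wait run is configured entirely through the rpc_cmd calls traced above (bdev_set_options, framework_start_init, nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener). As a rough sketch only, the same target could be set up by hand with scripts/rpc.py against an nvmf_tgt started with RPC-gated init; the use of rpc.py directly and the default RPC socket are assumptions, while the commands and arguments are taken from the trace:

  # Sketch: manual equivalent of the traced setup (assumes nvmf_tgt is already
  # running with init deferred so bdev_set_options can take effect, and is
  # listening on the default RPC socket /var/tmp/spdk.sock).
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  $RPC bdev_set_options -p 5 -c 1        # tiny bdev_io pool/cache, as in the traced test
  $RPC framework_start_init
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420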
00:14:36.348 06:09:42 -- nvmf/common.sh@545 -- # IFS=, 00:14:36.348 06:09:42 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:36.348 "params": { 00:14:36.348 "name": "Nvme1", 00:14:36.348 "trtype": "tcp", 00:14:36.348 "traddr": "10.0.0.2", 00:14:36.348 "adrfam": "ipv4", 00:14:36.348 "trsvcid": "4420", 00:14:36.348 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:36.348 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:36.348 "hdgst": false, 00:14:36.348 "ddgst": false 00:14:36.348 }, 00:14:36.348 "method": "bdev_nvme_attach_controller" 00:14:36.348 }' 00:14:36.348 06:09:42 -- nvmf/common.sh@545 -- # IFS=, 00:14:36.348 06:09:42 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:36.348 "params": { 00:14:36.348 "name": "Nvme1", 00:14:36.348 "trtype": "tcp", 00:14:36.348 "traddr": "10.0.0.2", 00:14:36.348 "adrfam": "ipv4", 00:14:36.348 "trsvcid": "4420", 00:14:36.348 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:36.348 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:36.348 "hdgst": false, 00:14:36.348 "ddgst": false 00:14:36.348 }, 00:14:36.348 "method": "bdev_nvme_attach_controller" 00:14:36.348 }' 00:14:36.348 06:09:42 -- nvmf/common.sh@545 -- # IFS=, 00:14:36.348 06:09:42 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:36.348 "params": { 00:14:36.348 "name": "Nvme1", 00:14:36.348 "trtype": "tcp", 00:14:36.348 "traddr": "10.0.0.2", 00:14:36.348 "adrfam": "ipv4", 00:14:36.348 "trsvcid": "4420", 00:14:36.348 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:36.348 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:36.348 "hdgst": false, 00:14:36.348 "ddgst": false 00:14:36.348 }, 00:14:36.348 "method": "bdev_nvme_attach_controller" 00:14:36.348 }' 00:14:36.348 [2024-07-13 06:09:42.720929] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:36.348 [2024-07-13 06:09:42.720929] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:36.348 [2024-07-13 06:09:42.720928] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:36.348 [2024-07-13 06:09:42.721016] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:36.348 [2024-07-13 06:09:42.721017] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:36.348 [2024-07-13 06:09:42.721017] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:36.348 [2024-07-13 06:09:42.722999] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
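Each bdevperf instance above reads its bdev configuration from /dev/fd/63. Reassembled from the printf trace just shown, the per-controller entry that gen_nvmf_target_json emits for this run is the following (the outer JSON wrapper assembled around it is not visible in the trace and is omitted here):

  {
    "params": {
      "name": "Nvme1",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }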
00:14:36.348 [2024-07-13 06:09:42.723056] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:36.348 EAL: No free 2048 kB hugepages reported on node 1 00:14:36.604 EAL: No free 2048 kB hugepages reported on node 1 00:14:36.604 [2024-07-13 06:09:42.888113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.604 EAL: No free 2048 kB hugepages reported on node 1 00:14:36.604 [2024-07-13 06:09:42.984246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:36.604 [2024-07-13 06:09:42.987748] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.604 EAL: No free 2048 kB hugepages reported on node 1 00:14:36.604 [2024-07-13 06:09:43.083327] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:36.604 [2024-07-13 06:09:43.089471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.861 [2024-07-13 06:09:43.185070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:36.861 [2024-07-13 06:09:43.192024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.861 [2024-07-13 06:09:43.281545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:14:37.117 Running I/O for 1 seconds... 00:14:37.117 Running I/O for 1 seconds... 00:14:37.117 Running I/O for 1 seconds... 00:14:37.117 Running I/O for 1 seconds... 00:14:38.048 00:14:38.048 Latency(us) 00:14:38.048 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.048 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:38.048 Nvme1n1 : 1.00 196185.02 766.35 0.00 0.00 649.89 248.79 861.68 00:14:38.048 =================================================================================================================== 00:14:38.048 Total : 196185.02 766.35 0.00 0.00 649.89 248.79 861.68 00:14:38.048 00:14:38.048 Latency(us) 00:14:38.048 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.048 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:38.048 Nvme1n1 : 1.02 7499.72 29.30 0.00 0.00 16853.19 8689.59 27573.67 00:14:38.048 =================================================================================================================== 00:14:38.048 Total : 7499.72 29.30 0.00 0.00 16853.19 8689.59 27573.67 00:14:38.048 00:14:38.048 Latency(us) 00:14:38.048 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.048 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:38.048 Nvme1n1 : 1.01 10008.74 39.10 0.00 0.00 12731.08 8204.14 22524.97 00:14:38.048 =================================================================================================================== 00:14:38.048 Total : 10008.74 39.10 0.00 0.00 12731.08 8204.14 22524.97 00:14:38.306 00:14:38.306 Latency(us) 00:14:38.306 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.306 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:38.306 Nvme1n1 : 1.00 7880.01 30.78 0.00 0.00 16201.20 4369.07 43690.67 00:14:38.306 =================================================================================================================== 00:14:38.306 Total : 7880.01 30.78 0.00 0.00 16201.20 4369.07 43690.67 00:14:38.306 06:09:44 -- target/bdev_io_wait.sh@38 -- # wait 1104406 00:14:38.306 
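The four result tables above come from the four bdevperf instances launched earlier, one per workload and core mask (write 0x10, read 0x20, flush 0x40, unmap 0x80). A minimal sketch of re-running just the write case outside the harness, assuming gen_nvmf_target_json from test/nvmf/common.sh is sourced in the shell and using process substitution in place of the harness' /dev/fd/63 plumbing:

  # Sketch: the traced write workload on its own (core mask 0x10, 128-deep, 4 KiB IO)
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
      -q 128 -o 4096 -w write -t 1 -s 256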
06:09:44 -- target/bdev_io_wait.sh@39 -- # wait 1104408 00:14:38.306 06:09:44 -- target/bdev_io_wait.sh@40 -- # wait 1104412 00:14:38.564 06:09:44 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:38.564 06:09:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.564 06:09:44 -- common/autotest_common.sh@10 -- # set +x 00:14:38.564 06:09:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.564 06:09:44 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:38.564 06:09:44 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:38.564 06:09:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:38.564 06:09:44 -- nvmf/common.sh@116 -- # sync 00:14:38.564 06:09:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:38.564 06:09:44 -- nvmf/common.sh@119 -- # set +e 00:14:38.564 06:09:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:38.564 06:09:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:38.564 rmmod nvme_tcp 00:14:38.565 rmmod nvme_fabrics 00:14:38.565 rmmod nvme_keyring 00:14:38.565 06:09:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:38.565 06:09:44 -- nvmf/common.sh@123 -- # set -e 00:14:38.565 06:09:44 -- nvmf/common.sh@124 -- # return 0 00:14:38.565 06:09:44 -- nvmf/common.sh@477 -- # '[' -n 1104192 ']' 00:14:38.565 06:09:44 -- nvmf/common.sh@478 -- # killprocess 1104192 00:14:38.565 06:09:44 -- common/autotest_common.sh@926 -- # '[' -z 1104192 ']' 00:14:38.565 06:09:44 -- common/autotest_common.sh@930 -- # kill -0 1104192 00:14:38.565 06:09:44 -- common/autotest_common.sh@931 -- # uname 00:14:38.565 06:09:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:38.565 06:09:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1104192 00:14:38.565 06:09:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:38.565 06:09:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:38.565 06:09:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1104192' 00:14:38.565 killing process with pid 1104192 00:14:38.565 06:09:44 -- common/autotest_common.sh@945 -- # kill 1104192 00:14:38.565 06:09:44 -- common/autotest_common.sh@950 -- # wait 1104192 00:14:38.823 06:09:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:38.823 06:09:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:38.823 06:09:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:38.823 06:09:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:38.823 06:09:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:38.823 06:09:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.823 06:09:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:38.823 06:09:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.355 06:09:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:41.355 00:14:41.355 real 0m8.103s 00:14:41.355 user 0m19.488s 00:14:41.355 sys 0m3.780s 00:14:41.355 06:09:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:41.355 06:09:47 -- common/autotest_common.sh@10 -- # set +x 00:14:41.355 ************************************ 00:14:41.355 END TEST nvmf_bdev_io_wait 00:14:41.355 ************************************ 00:14:41.355 06:09:47 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:41.355 06:09:47 -- common/autotest_common.sh@1077 
-- # '[' 3 -le 1 ']' 00:14:41.355 06:09:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:41.355 06:09:47 -- common/autotest_common.sh@10 -- # set +x 00:14:41.355 ************************************ 00:14:41.355 START TEST nvmf_queue_depth 00:14:41.355 ************************************ 00:14:41.355 06:09:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:41.355 * Looking for test storage... 00:14:41.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:41.355 06:09:47 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:41.355 06:09:47 -- nvmf/common.sh@7 -- # uname -s 00:14:41.355 06:09:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:41.355 06:09:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:41.355 06:09:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:41.355 06:09:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:41.355 06:09:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:41.355 06:09:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:41.355 06:09:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:41.355 06:09:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:41.355 06:09:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:41.355 06:09:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:41.355 06:09:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:41.355 06:09:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:41.355 06:09:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:41.355 06:09:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:41.355 06:09:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:41.355 06:09:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:41.355 06:09:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:41.355 06:09:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:41.356 06:09:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:41.356 06:09:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.356 06:09:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.356 06:09:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.356 06:09:47 -- paths/export.sh@5 -- # export PATH 00:14:41.356 06:09:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.356 06:09:47 -- nvmf/common.sh@46 -- # : 0 00:14:41.356 06:09:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:41.356 06:09:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:41.356 06:09:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:41.356 06:09:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:41.356 06:09:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:41.356 06:09:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:41.356 06:09:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:41.356 06:09:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:41.356 06:09:47 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:41.356 06:09:47 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:41.356 06:09:47 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:41.356 06:09:47 -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:41.356 06:09:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:41.356 06:09:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:41.356 06:09:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:41.356 06:09:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:41.356 06:09:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:41.356 06:09:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.356 06:09:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:41.356 06:09:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.356 06:09:47 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:41.356 06:09:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:41.356 06:09:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:41.356 06:09:47 -- common/autotest_common.sh@10 -- # set +x 00:14:43.263 06:09:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:43.263 06:09:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:43.263 06:09:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:43.263 06:09:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:43.263 06:09:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:43.263 06:09:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:43.263 06:09:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:43.263 06:09:49 -- nvmf/common.sh@294 -- # net_devs=() 
00:14:43.263 06:09:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:43.263 06:09:49 -- nvmf/common.sh@295 -- # e810=() 00:14:43.263 06:09:49 -- nvmf/common.sh@295 -- # local -ga e810 00:14:43.263 06:09:49 -- nvmf/common.sh@296 -- # x722=() 00:14:43.263 06:09:49 -- nvmf/common.sh@296 -- # local -ga x722 00:14:43.263 06:09:49 -- nvmf/common.sh@297 -- # mlx=() 00:14:43.263 06:09:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:43.263 06:09:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:43.263 06:09:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:43.263 06:09:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:43.263 06:09:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:43.263 06:09:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:43.263 06:09:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:43.263 06:09:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:43.263 06:09:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:43.263 06:09:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:43.263 06:09:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:43.263 06:09:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:43.263 06:09:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:43.263 06:09:49 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:43.263 06:09:49 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:43.263 06:09:49 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:43.263 06:09:49 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:43.263 06:09:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:43.264 06:09:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:43.264 06:09:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:43.264 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:43.264 06:09:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:43.264 06:09:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:43.264 06:09:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.264 06:09:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.264 06:09:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:43.264 06:09:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:43.264 06:09:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:43.264 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:43.264 06:09:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:43.264 06:09:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:43.264 06:09:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.264 06:09:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.264 06:09:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:43.264 06:09:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:43.264 06:09:49 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:43.264 06:09:49 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:43.264 06:09:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:43.264 06:09:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.264 06:09:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:43.264 06:09:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
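The discovery loop traced here maps each supported PCI function to its kernel net device by globbing sysfs and stripping the path, which is what produces the "Found net devices under ..." lines that follow. A condensed sketch of that pattern, with the device addresses taken from this host:

  # Sketch of the netdev discovery seen in the trace above
  for pci in 0000:0a:00.0 0000:0a:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface name
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done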
00:14:43.264 06:09:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:43.264 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:43.264 06:09:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.264 06:09:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:43.264 06:09:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.264 06:09:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:43.264 06:09:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.264 06:09:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:43.264 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:43.264 06:09:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.264 06:09:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:43.264 06:09:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:43.264 06:09:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:43.264 06:09:49 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:43.264 06:09:49 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:43.264 06:09:49 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:43.264 06:09:49 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:43.264 06:09:49 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:43.264 06:09:49 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:43.264 06:09:49 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:43.264 06:09:49 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:43.264 06:09:49 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:43.264 06:09:49 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:43.264 06:09:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:43.264 06:09:49 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:43.264 06:09:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:43.264 06:09:49 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:43.264 06:09:49 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:43.264 06:09:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:43.264 06:09:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:43.264 06:09:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:43.264 06:09:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:43.264 06:09:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:43.264 06:09:49 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:43.264 06:09:49 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:43.264 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:43.264 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:14:43.264 00:14:43.264 --- 10.0.0.2 ping statistics --- 00:14:43.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.264 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:14:43.264 06:09:49 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:43.264 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:43.264 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:14:43.264 00:14:43.264 --- 10.0.0.1 ping statistics --- 00:14:43.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.264 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:14:43.264 06:09:49 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:43.264 06:09:49 -- nvmf/common.sh@410 -- # return 0 00:14:43.264 06:09:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:43.264 06:09:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:43.264 06:09:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:43.264 06:09:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:43.264 06:09:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:43.264 06:09:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:43.264 06:09:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:43.264 06:09:49 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:43.264 06:09:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:43.264 06:09:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:43.264 06:09:49 -- common/autotest_common.sh@10 -- # set +x 00:14:43.264 06:09:49 -- nvmf/common.sh@469 -- # nvmfpid=1106690 00:14:43.264 06:09:49 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:43.264 06:09:49 -- nvmf/common.sh@470 -- # waitforlisten 1106690 00:14:43.264 06:09:49 -- common/autotest_common.sh@819 -- # '[' -z 1106690 ']' 00:14:43.264 06:09:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.264 06:09:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:43.264 06:09:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.264 06:09:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:43.264 06:09:49 -- common/autotest_common.sh@10 -- # set +x 00:14:43.264 [2024-07-13 06:09:49.682996] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:43.264 [2024-07-13 06:09:49.683089] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.264 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.264 [2024-07-13 06:09:49.747398] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.522 [2024-07-13 06:09:49.857359] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:43.522 [2024-07-13 06:09:49.857518] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.522 [2024-07-13 06:09:49.857537] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.522 [2024-07-13 06:09:49.857551] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
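The nvmf_tgt started here for the queue-depth test runs inside the cvl_0_0_ns_spdk network namespace that nvmf_tcp_init created just above (hence the "ip netns exec cvl_0_0_ns_spdk" prefix on the app). Condensed from the traced commands, that per-test network bring-up is roughly the following (run as root; interface names and addresses as on this host):

  # Sketch of the traced nvmf_tcp_init network setup
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # sanity check across the namespace boundary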
00:14:43.522 [2024-07-13 06:09:49.857585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.456 06:09:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:44.456 06:09:50 -- common/autotest_common.sh@852 -- # return 0 00:14:44.456 06:09:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:44.456 06:09:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:44.456 06:09:50 -- common/autotest_common.sh@10 -- # set +x 00:14:44.456 06:09:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:44.456 06:09:50 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:44.456 06:09:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.456 06:09:50 -- common/autotest_common.sh@10 -- # set +x 00:14:44.456 [2024-07-13 06:09:50.649985] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:44.456 06:09:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.456 06:09:50 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:44.456 06:09:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.456 06:09:50 -- common/autotest_common.sh@10 -- # set +x 00:14:44.456 Malloc0 00:14:44.456 06:09:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.456 06:09:50 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:44.456 06:09:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.456 06:09:50 -- common/autotest_common.sh@10 -- # set +x 00:14:44.456 06:09:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.456 06:09:50 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:44.456 06:09:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.456 06:09:50 -- common/autotest_common.sh@10 -- # set +x 00:14:44.456 06:09:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.456 06:09:50 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:44.456 06:09:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.456 06:09:50 -- common/autotest_common.sh@10 -- # set +x 00:14:44.456 [2024-07-13 06:09:50.709763] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:44.456 06:09:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.456 06:09:50 -- target/queue_depth.sh@30 -- # bdevperf_pid=1106848 00:14:44.456 06:09:50 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:44.456 06:09:50 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:44.456 06:09:50 -- target/queue_depth.sh@33 -- # waitforlisten 1106848 /var/tmp/bdevperf.sock 00:14:44.456 06:09:50 -- common/autotest_common.sh@819 -- # '[' -z 1106848 ']' 00:14:44.456 06:09:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:44.456 06:09:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:44.456 06:09:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:44.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:44.456 06:09:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:44.456 06:09:50 -- common/autotest_common.sh@10 -- # set +x 00:14:44.456 [2024-07-13 06:09:50.750767] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:44.456 [2024-07-13 06:09:50.750840] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1106848 ] 00:14:44.456 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.456 [2024-07-13 06:09:50.812544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.456 [2024-07-13 06:09:50.925817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.388 06:09:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:45.388 06:09:51 -- common/autotest_common.sh@852 -- # return 0 00:14:45.388 06:09:51 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:45.388 06:09:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:45.388 06:09:51 -- common/autotest_common.sh@10 -- # set +x 00:14:45.646 NVMe0n1 00:14:45.646 06:09:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:45.646 06:09:51 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:45.646 Running I/O for 10 seconds... 00:14:55.617 00:14:55.617 Latency(us) 00:14:55.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.617 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:14:55.617 Verification LBA range: start 0x0 length 0x4000 00:14:55.617 NVMe0n1 : 10.07 12357.60 48.27 0.00 0.00 82535.66 14951.92 77283.93 00:14:55.617 =================================================================================================================== 00:14:55.617 Total : 12357.60 48.27 0.00 0.00 82535.66 14951.92 77283.93 00:14:55.617 0 00:14:55.617 06:10:02 -- target/queue_depth.sh@39 -- # killprocess 1106848 00:14:55.617 06:10:02 -- common/autotest_common.sh@926 -- # '[' -z 1106848 ']' 00:14:55.617 06:10:02 -- common/autotest_common.sh@930 -- # kill -0 1106848 00:14:55.617 06:10:02 -- common/autotest_common.sh@931 -- # uname 00:14:55.617 06:10:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:55.617 06:10:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1106848 00:14:55.875 06:10:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:55.875 06:10:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:55.875 06:10:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1106848' 00:14:55.875 killing process with pid 1106848 00:14:55.875 06:10:02 -- common/autotest_common.sh@945 -- # kill 1106848 00:14:55.875 Received shutdown signal, test time was about 10.000000 seconds 00:14:55.875 00:14:55.875 Latency(us) 00:14:55.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.875 =================================================================================================================== 00:14:55.875 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:55.875 06:10:02 -- 
common/autotest_common.sh@950 -- # wait 1106848 00:14:56.132 06:10:02 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:56.132 06:10:02 -- target/queue_depth.sh@43 -- # nvmftestfini 00:14:56.132 06:10:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:56.132 06:10:02 -- nvmf/common.sh@116 -- # sync 00:14:56.132 06:10:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:56.132 06:10:02 -- nvmf/common.sh@119 -- # set +e 00:14:56.132 06:10:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:56.132 06:10:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:56.132 rmmod nvme_tcp 00:14:56.132 rmmod nvme_fabrics 00:14:56.132 rmmod nvme_keyring 00:14:56.132 06:10:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:56.132 06:10:02 -- nvmf/common.sh@123 -- # set -e 00:14:56.132 06:10:02 -- nvmf/common.sh@124 -- # return 0 00:14:56.132 06:10:02 -- nvmf/common.sh@477 -- # '[' -n 1106690 ']' 00:14:56.132 06:10:02 -- nvmf/common.sh@478 -- # killprocess 1106690 00:14:56.132 06:10:02 -- common/autotest_common.sh@926 -- # '[' -z 1106690 ']' 00:14:56.132 06:10:02 -- common/autotest_common.sh@930 -- # kill -0 1106690 00:14:56.132 06:10:02 -- common/autotest_common.sh@931 -- # uname 00:14:56.132 06:10:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:56.132 06:10:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1106690 00:14:56.132 06:10:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:56.132 06:10:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:56.132 06:10:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1106690' 00:14:56.132 killing process with pid 1106690 00:14:56.132 06:10:02 -- common/autotest_common.sh@945 -- # kill 1106690 00:14:56.132 06:10:02 -- common/autotest_common.sh@950 -- # wait 1106690 00:14:56.391 06:10:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:56.391 06:10:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:56.391 06:10:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:56.391 06:10:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:56.391 06:10:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:56.391 06:10:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.391 06:10:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.391 06:10:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.925 06:10:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:58.925 00:14:58.925 real 0m17.494s 00:14:58.925 user 0m25.000s 00:14:58.925 sys 0m3.193s 00:14:58.925 06:10:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:58.925 06:10:04 -- common/autotest_common.sh@10 -- # set +x 00:14:58.925 ************************************ 00:14:58.925 END TEST nvmf_queue_depth 00:14:58.925 ************************************ 00:14:58.925 06:10:04 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:58.925 06:10:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:58.925 06:10:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:58.925 06:10:04 -- common/autotest_common.sh@10 -- # set +x 00:14:58.925 ************************************ 00:14:58.925 START TEST nvmf_multipath 00:14:58.925 ************************************ 00:14:58.925 06:10:04 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:14:58.925 * Looking for test storage... 00:14:58.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:58.925 06:10:04 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:58.925 06:10:04 -- nvmf/common.sh@7 -- # uname -s 00:14:58.925 06:10:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:58.925 06:10:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:58.925 06:10:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:58.925 06:10:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:58.925 06:10:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:58.925 06:10:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:58.925 06:10:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:58.925 06:10:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:58.925 06:10:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:58.925 06:10:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:58.925 06:10:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:58.925 06:10:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:58.925 06:10:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:58.925 06:10:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:58.925 06:10:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:58.925 06:10:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:58.925 06:10:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:58.925 06:10:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:58.925 06:10:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:58.925 06:10:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.925 06:10:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.925 06:10:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.925 06:10:04 -- paths/export.sh@5 -- # export PATH 00:14:58.925 06:10:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:58.925 06:10:04 -- nvmf/common.sh@46 -- # : 0 00:14:58.925 06:10:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:58.925 06:10:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:58.925 06:10:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:58.925 06:10:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:58.925 06:10:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:58.925 06:10:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:58.925 06:10:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:58.925 06:10:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:58.925 06:10:04 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:58.925 06:10:04 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:58.925 06:10:04 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:14:58.925 06:10:04 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:58.925 06:10:04 -- target/multipath.sh@43 -- # nvmftestinit 00:14:58.925 06:10:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:58.925 06:10:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:58.925 06:10:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:58.925 06:10:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:58.925 06:10:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:58.925 06:10:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.925 06:10:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:58.925 06:10:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:58.925 06:10:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:58.925 06:10:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:58.925 06:10:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:58.925 06:10:04 -- common/autotest_common.sh@10 -- # set +x 00:15:00.824 06:10:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:00.824 06:10:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:00.824 06:10:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:00.824 06:10:06 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:00.824 06:10:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:00.824 06:10:06 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:00.824 06:10:06 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:15:00.824 06:10:06 -- nvmf/common.sh@294 -- # net_devs=() 00:15:00.824 06:10:06 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:00.824 06:10:06 -- nvmf/common.sh@295 -- # e810=() 00:15:00.824 06:10:06 -- nvmf/common.sh@295 -- # local -ga e810 00:15:00.824 06:10:06 -- nvmf/common.sh@296 -- # x722=() 00:15:00.824 06:10:06 -- nvmf/common.sh@296 -- # local -ga x722 00:15:00.824 06:10:06 -- nvmf/common.sh@297 -- # mlx=() 00:15:00.824 06:10:06 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:00.824 06:10:06 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:00.824 06:10:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:00.824 06:10:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:00.824 06:10:06 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:00.824 06:10:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:00.824 06:10:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:00.824 06:10:06 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:00.824 06:10:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:00.824 06:10:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:00.824 06:10:06 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:00.824 06:10:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:00.824 06:10:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:00.824 06:10:06 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:00.824 06:10:06 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:00.824 06:10:06 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:00.824 06:10:06 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:00.824 06:10:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:00.824 06:10:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:00.824 06:10:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:00.824 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:00.824 06:10:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:00.824 06:10:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:00.824 06:10:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:00.824 06:10:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:00.824 06:10:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:00.824 06:10:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:00.824 06:10:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:00.824 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:00.824 06:10:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:00.824 06:10:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:00.824 06:10:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:00.824 06:10:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:00.824 06:10:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:00.824 06:10:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:00.824 06:10:06 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:00.824 06:10:06 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:00.824 06:10:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:00.825 06:10:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:00.825 06:10:06 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:15:00.825 06:10:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:00.825 06:10:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:00.825 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:00.825 06:10:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:00.825 06:10:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:00.825 06:10:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:00.825 06:10:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:00.825 06:10:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:00.825 06:10:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:00.825 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:00.825 06:10:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:00.825 06:10:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:00.825 06:10:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:00.825 06:10:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:00.825 06:10:06 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:00.825 06:10:06 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:00.825 06:10:06 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:00.825 06:10:06 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:00.825 06:10:06 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:00.825 06:10:06 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:00.825 06:10:06 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:00.825 06:10:06 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:00.825 06:10:06 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:00.825 06:10:06 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:00.825 06:10:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:00.825 06:10:06 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:00.825 06:10:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:00.825 06:10:06 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:00.825 06:10:06 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:00.825 06:10:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:00.825 06:10:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:00.825 06:10:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:00.825 06:10:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:00.825 06:10:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:00.825 06:10:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:00.825 06:10:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:00.825 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:00.825 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:15:00.825 00:15:00.825 --- 10.0.0.2 ping statistics --- 00:15:00.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.825 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:15:00.825 06:10:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:00.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:00.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:15:00.825 00:15:00.825 --- 10.0.0.1 ping statistics --- 00:15:00.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.825 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:15:00.825 06:10:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:00.825 06:10:07 -- nvmf/common.sh@410 -- # return 0 00:15:00.825 06:10:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:00.825 06:10:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:00.825 06:10:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:00.825 06:10:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:00.825 06:10:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:00.825 06:10:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:00.825 06:10:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:00.825 06:10:07 -- target/multipath.sh@45 -- # '[' -z ']' 00:15:00.825 06:10:07 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:00.825 only one NIC for nvmf test 00:15:00.825 06:10:07 -- target/multipath.sh@47 -- # nvmftestfini 00:15:00.825 06:10:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:00.825 06:10:07 -- nvmf/common.sh@116 -- # sync 00:15:00.825 06:10:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:00.825 06:10:07 -- nvmf/common.sh@119 -- # set +e 00:15:00.825 06:10:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:00.825 06:10:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:00.825 rmmod nvme_tcp 00:15:00.825 rmmod nvme_fabrics 00:15:00.825 rmmod nvme_keyring 00:15:00.825 06:10:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:00.825 06:10:07 -- nvmf/common.sh@123 -- # set -e 00:15:00.825 06:10:07 -- nvmf/common.sh@124 -- # return 0 00:15:00.825 06:10:07 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:15:00.825 06:10:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:00.825 06:10:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:00.825 06:10:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:00.825 06:10:07 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:00.825 06:10:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:00.825 06:10:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.825 06:10:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:00.825 06:10:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.725 06:10:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:02.725 06:10:09 -- target/multipath.sh@48 -- # exit 0 00:15:02.725 06:10:09 -- target/multipath.sh@1 -- # nvmftestfini 00:15:02.725 06:10:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:02.725 06:10:09 -- nvmf/common.sh@116 -- # sync 00:15:02.725 06:10:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:02.725 06:10:09 -- nvmf/common.sh@119 -- # set +e 00:15:02.725 06:10:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:02.725 06:10:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:02.725 06:10:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:02.725 06:10:09 -- nvmf/common.sh@123 -- # set -e 00:15:02.725 06:10:09 -- nvmf/common.sh@124 -- # return 0 00:15:02.725 06:10:09 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:15:02.725 06:10:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:02.725 06:10:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:02.725 06:10:09 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:15:02.725 06:10:09 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:02.725 06:10:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:02.725 06:10:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.725 06:10:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.725 06:10:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.725 06:10:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:02.725 00:15:02.725 real 0m4.364s 00:15:02.725 user 0m0.837s 00:15:02.725 sys 0m1.499s 00:15:02.725 06:10:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:02.725 06:10:09 -- common/autotest_common.sh@10 -- # set +x 00:15:02.725 ************************************ 00:15:02.725 END TEST nvmf_multipath 00:15:02.725 ************************************ 00:15:02.984 06:10:09 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:02.984 06:10:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:02.984 06:10:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:02.984 06:10:09 -- common/autotest_common.sh@10 -- # set +x 00:15:02.984 ************************************ 00:15:02.984 START TEST nvmf_zcopy 00:15:02.984 ************************************ 00:15:02.984 06:10:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:02.984 * Looking for test storage... 00:15:02.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:02.984 06:10:09 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:02.984 06:10:09 -- nvmf/common.sh@7 -- # uname -s 00:15:02.984 06:10:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:02.984 06:10:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:02.984 06:10:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:02.984 06:10:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:02.984 06:10:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:02.984 06:10:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:02.984 06:10:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:02.984 06:10:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:02.984 06:10:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:02.984 06:10:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:02.984 06:10:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:02.984 06:10:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:02.984 06:10:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:02.984 06:10:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:02.984 06:10:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:02.984 06:10:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:02.984 06:10:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:02.984 06:10:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:02.984 06:10:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:02.984 06:10:09 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.984 06:10:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.984 06:10:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.984 06:10:09 -- paths/export.sh@5 -- # export PATH 00:15:02.984 06:10:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.984 06:10:09 -- nvmf/common.sh@46 -- # : 0 00:15:02.984 06:10:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:02.984 06:10:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:02.984 06:10:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:02.984 06:10:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:02.984 06:10:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:02.984 06:10:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:02.984 06:10:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:02.984 06:10:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:02.984 06:10:09 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:02.984 06:10:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:02.984 06:10:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:02.984 06:10:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:02.984 06:10:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:02.984 06:10:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:02.984 06:10:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.984 06:10:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.984 06:10:09 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.984 06:10:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:02.984 06:10:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:02.984 06:10:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:02.984 06:10:09 -- common/autotest_common.sh@10 -- # set +x 00:15:04.884 06:10:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:04.884 06:10:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:04.884 06:10:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:04.884 06:10:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:04.884 06:10:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:04.884 06:10:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:04.884 06:10:11 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:04.884 06:10:11 -- nvmf/common.sh@294 -- # net_devs=() 00:15:04.884 06:10:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:04.884 06:10:11 -- nvmf/common.sh@295 -- # e810=() 00:15:04.884 06:10:11 -- nvmf/common.sh@295 -- # local -ga e810 00:15:04.884 06:10:11 -- nvmf/common.sh@296 -- # x722=() 00:15:04.884 06:10:11 -- nvmf/common.sh@296 -- # local -ga x722 00:15:04.884 06:10:11 -- nvmf/common.sh@297 -- # mlx=() 00:15:04.884 06:10:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:04.884 06:10:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:04.884 06:10:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:04.884 06:10:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:04.884 06:10:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:04.884 06:10:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:04.885 06:10:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:04.885 06:10:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:04.885 06:10:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:04.885 06:10:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:04.885 06:10:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:04.885 06:10:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:04.885 06:10:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:04.885 06:10:11 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:04.885 06:10:11 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:04.885 06:10:11 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:04.885 06:10:11 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:04.885 06:10:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:04.885 06:10:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:04.885 06:10:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:04.885 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:04.885 06:10:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:04.885 06:10:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:04.885 06:10:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:04.885 06:10:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:04.885 06:10:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:04.885 06:10:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:04.885 06:10:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:04.885 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:04.885 
06:10:11 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:04.885 06:10:11 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:04.885 06:10:11 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:04.885 06:10:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:04.885 06:10:11 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:04.885 06:10:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:04.885 06:10:11 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:04.885 06:10:11 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:04.885 06:10:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:04.885 06:10:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:04.885 06:10:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:04.885 06:10:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:04.885 06:10:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:04.885 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:04.885 06:10:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:04.885 06:10:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:04.885 06:10:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:04.885 06:10:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:04.885 06:10:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:04.885 06:10:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:04.885 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:04.885 06:10:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:04.885 06:10:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:04.885 06:10:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:04.885 06:10:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:04.885 06:10:11 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:04.885 06:10:11 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:04.885 06:10:11 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:04.885 06:10:11 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:04.885 06:10:11 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:04.885 06:10:11 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:04.885 06:10:11 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:04.885 06:10:11 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:04.885 06:10:11 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:04.885 06:10:11 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:04.885 06:10:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:04.885 06:10:11 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:04.885 06:10:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:04.885 06:10:11 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:04.885 06:10:11 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:04.885 06:10:11 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:04.885 06:10:11 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:04.885 06:10:11 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:04.885 06:10:11 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:04.885 06:10:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:04.885 06:10:11 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:04.885 06:10:11 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:05.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:05.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:15:05.143 00:15:05.143 --- 10.0.0.2 ping statistics --- 00:15:05.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.143 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:15:05.143 06:10:11 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:05.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:05.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:15:05.143 00:15:05.143 --- 10.0.0.1 ping statistics --- 00:15:05.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:05.143 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:15:05.143 06:10:11 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:05.143 06:10:11 -- nvmf/common.sh@410 -- # return 0 00:15:05.143 06:10:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:05.143 06:10:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:05.143 06:10:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:05.143 06:10:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:05.143 06:10:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:05.143 06:10:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:05.143 06:10:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:05.143 06:10:11 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:05.143 06:10:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:05.143 06:10:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:05.143 06:10:11 -- common/autotest_common.sh@10 -- # set +x 00:15:05.143 06:10:11 -- nvmf/common.sh@469 -- # nvmfpid=1112092 00:15:05.143 06:10:11 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:05.143 06:10:11 -- nvmf/common.sh@470 -- # waitforlisten 1112092 00:15:05.143 06:10:11 -- common/autotest_common.sh@819 -- # '[' -z 1112092 ']' 00:15:05.143 06:10:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.143 06:10:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:05.143 06:10:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.143 06:10:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:05.143 06:10:11 -- common/autotest_common.sh@10 -- # set +x 00:15:05.143 [2024-07-13 06:10:11.475169] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:15:05.143 [2024-07-13 06:10:11.475248] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:05.143 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.143 [2024-07-13 06:10:11.549086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.401 [2024-07-13 06:10:11.667030] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:05.401 [2024-07-13 06:10:11.667183] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:05.401 [2024-07-13 06:10:11.667202] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:05.401 [2024-07-13 06:10:11.667216] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:05.401 [2024-07-13 06:10:11.667245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.967 06:10:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:05.967 06:10:12 -- common/autotest_common.sh@852 -- # return 0 00:15:05.967 06:10:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:05.967 06:10:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:05.967 06:10:12 -- common/autotest_common.sh@10 -- # set +x 00:15:05.967 06:10:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:05.967 06:10:12 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:05.967 06:10:12 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:05.967 06:10:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:05.968 06:10:12 -- common/autotest_common.sh@10 -- # set +x 00:15:05.968 [2024-07-13 06:10:12.467732] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:05.968 06:10:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:05.968 06:10:12 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:05.968 06:10:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:05.968 06:10:12 -- common/autotest_common.sh@10 -- # set +x 00:15:06.226 06:10:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:06.226 06:10:12 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:06.226 06:10:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.226 06:10:12 -- common/autotest_common.sh@10 -- # set +x 00:15:06.226 [2024-07-13 06:10:12.483955] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:06.226 06:10:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:06.226 06:10:12 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:06.226 06:10:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.226 06:10:12 -- common/autotest_common.sh@10 -- # set +x 00:15:06.226 06:10:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:06.226 06:10:12 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:06.226 06:10:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.226 06:10:12 -- common/autotest_common.sh@10 -- # set +x 00:15:06.226 malloc0 00:15:06.226 06:10:12 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:15:06.226 06:10:12 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:06.226 06:10:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.226 06:10:12 -- common/autotest_common.sh@10 -- # set +x 00:15:06.226 06:10:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:06.226 06:10:12 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:06.226 06:10:12 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:06.226 06:10:12 -- nvmf/common.sh@520 -- # config=() 00:15:06.226 06:10:12 -- nvmf/common.sh@520 -- # local subsystem config 00:15:06.226 06:10:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:06.226 06:10:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:06.226 { 00:15:06.226 "params": { 00:15:06.226 "name": "Nvme$subsystem", 00:15:06.226 "trtype": "$TEST_TRANSPORT", 00:15:06.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:06.226 "adrfam": "ipv4", 00:15:06.226 "trsvcid": "$NVMF_PORT", 00:15:06.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:06.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:06.226 "hdgst": ${hdgst:-false}, 00:15:06.226 "ddgst": ${ddgst:-false} 00:15:06.226 }, 00:15:06.226 "method": "bdev_nvme_attach_controller" 00:15:06.226 } 00:15:06.226 EOF 00:15:06.226 )") 00:15:06.226 06:10:12 -- nvmf/common.sh@542 -- # cat 00:15:06.226 06:10:12 -- nvmf/common.sh@544 -- # jq . 00:15:06.226 06:10:12 -- nvmf/common.sh@545 -- # IFS=, 00:15:06.226 06:10:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:06.226 "params": { 00:15:06.226 "name": "Nvme1", 00:15:06.226 "trtype": "tcp", 00:15:06.226 "traddr": "10.0.0.2", 00:15:06.226 "adrfam": "ipv4", 00:15:06.226 "trsvcid": "4420", 00:15:06.226 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.226 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:06.226 "hdgst": false, 00:15:06.226 "ddgst": false 00:15:06.226 }, 00:15:06.226 "method": "bdev_nvme_attach_controller" 00:15:06.226 }' 00:15:06.226 [2024-07-13 06:10:12.562263] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:06.226 [2024-07-13 06:10:12.562346] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1112250 ] 00:15:06.226 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.226 [2024-07-13 06:10:12.632170] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.485 [2024-07-13 06:10:12.754055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.485 Running I/O for 10 seconds... 
00:15:18.687 00:15:18.687 Latency(us) 00:15:18.687 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:18.687 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:18.687 Verification LBA range: start 0x0 length 0x1000 00:15:18.687 Nvme1n1 : 10.01 8680.09 67.81 0.00 0.00 14706.07 1856.85 22622.06 00:15:18.687 =================================================================================================================== 00:15:18.687 Total : 8680.09 67.81 0.00 0.00 14706.07 1856.85 22622.06 00:15:18.687 06:10:23 -- target/zcopy.sh@39 -- # perfpid=1113474 00:15:18.687 06:10:23 -- target/zcopy.sh@41 -- # xtrace_disable 00:15:18.687 06:10:23 -- common/autotest_common.sh@10 -- # set +x 00:15:18.687 06:10:23 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:18.687 06:10:23 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:18.687 06:10:23 -- nvmf/common.sh@520 -- # config=() 00:15:18.687 06:10:23 -- nvmf/common.sh@520 -- # local subsystem config 00:15:18.687 06:10:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:18.687 06:10:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:18.687 { 00:15:18.687 "params": { 00:15:18.687 "name": "Nvme$subsystem", 00:15:18.688 "trtype": "$TEST_TRANSPORT", 00:15:18.688 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:18.688 "adrfam": "ipv4", 00:15:18.688 "trsvcid": "$NVMF_PORT", 00:15:18.688 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:18.688 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:18.688 "hdgst": ${hdgst:-false}, 00:15:18.688 "ddgst": ${ddgst:-false} 00:15:18.688 }, 00:15:18.688 "method": "bdev_nvme_attach_controller" 00:15:18.688 } 00:15:18.688 EOF 00:15:18.688 )") 00:15:18.688 06:10:23 -- nvmf/common.sh@542 -- # cat 00:15:18.688 [2024-07-13 06:10:23.310618] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.310662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 06:10:23 -- nvmf/common.sh@544 -- # jq . 
00:15:18.688 06:10:23 -- nvmf/common.sh@545 -- # IFS=, 00:15:18.688 06:10:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:18.688 "params": { 00:15:18.688 "name": "Nvme1", 00:15:18.688 "trtype": "tcp", 00:15:18.688 "traddr": "10.0.0.2", 00:15:18.688 "adrfam": "ipv4", 00:15:18.688 "trsvcid": "4420", 00:15:18.688 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:18.688 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:18.688 "hdgst": false, 00:15:18.688 "ddgst": false 00:15:18.688 }, 00:15:18.688 "method": "bdev_nvme_attach_controller" 00:15:18.688 }' 00:15:18.688 [2024-07-13 06:10:23.318578] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.318606] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.326600] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.326625] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.334612] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.334634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.342631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.342653] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.345555] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:18.688 [2024-07-13 06:10:23.345625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1113474 ] 00:15:18.688 [2024-07-13 06:10:23.350650] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.350669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.358672] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.358691] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.366692] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.366712] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.688 [2024-07-13 06:10:23.374720] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.374740] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.382737] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.382757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.390773] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.390799] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.398796] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.398820] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.406804] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.406825] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.407804] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.688 [2024-07-13 06:10:23.414889] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.414938] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.422909] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.422959] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.430897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.430936] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.438933] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.438955] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.446948] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.446971] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.454964] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.454994] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.462984] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.463006] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.471008] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.471033] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.479045] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.479080] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.487042] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.487064] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.495060] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.495081] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.503083] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.503104] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.511103] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 
06:10:23.511124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.519136] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.519175] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.525483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.688 [2024-07-13 06:10:23.527172] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.527193] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.535189] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.535210] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.543259] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.543296] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.551271] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.551310] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.559304] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.559343] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.567316] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.567355] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.575339] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.575378] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.583359] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.583396] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.591359] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.591383] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.599404] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.599442] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.607423] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.607462] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.615421] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.615446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.623442] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.623466] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.631476] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.631505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.639500] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.639527] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.647520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.647547] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.655546] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.655569] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.663565] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.663591] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.671587] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.671613] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.679607] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.688 [2024-07-13 06:10:23.679631] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.688 [2024-07-13 06:10:23.687631] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:23.687656] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:23.695654] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:23.695677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:23.703678] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:23.703701] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:23.711706] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:23.711733] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:23.719723] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:23.719747] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:23.727747] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:23.727772] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:23.735770] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:23.735794] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:23.743792] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:23.743816] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:23.751826] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:23.751849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:23.759840] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:23.759875] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:23.767860] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:23.767892] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:23.775893] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:23.775931] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:23.783914] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:23.783960] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:23.791960] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:23.791982] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:23.799962] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:23.799984] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:23.807974] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:23.808014] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:23.815996] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:23.816019] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 Running I/O for 5 seconds... 
00:15:18.689 [2024-07-13 06:10:23.824019] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:23.824039] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:23.838633] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:23.838662] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:23.849078] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:23.849105] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:23.859893] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:23.859920] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:23.870555] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:23.870582] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:23.881173] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:23.881201] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:23.891574] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:23.891601] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:23.904290] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:23.904317] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:23.915782] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:23.915809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:23.924185] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:23.924213] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:23.936966] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:23.936999] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:23.947468] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:23.947494] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:23.958012] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:23.958038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:23.970580] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:23.970606] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:23.979978] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 
[2024-07-13 06:10:23.980004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:23.991234] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:23.991264] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:24.002121] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:24.002151] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:24.012826] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:24.012856] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:24.025593] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:24.025623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:24.036008] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:24.036038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:24.047163] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:24.047193] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:24.059754] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:24.059784] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:24.069196] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:24.069225] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:24.080570] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:24.080600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:24.091210] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:24.091241] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:24.102156] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:24.102186] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:24.114376] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:24.114406] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:24.124568] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:24.124597] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:24.135613] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:24.135643] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:24.145935] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:24.145973] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:24.156535] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:24.156565] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:24.167557] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:24.167586] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:24.178215] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:24.178244] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:24.189229] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:24.189259] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:24.199974] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:24.200003] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:24.210956] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:24.210986] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:24.221855] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:24.221894] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:24.232679] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:24.232708] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:24.243441] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:24.243471] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:24.256108] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:24.256138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:24.267855] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.689 [2024-07-13 06:10:24.267893] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.689 [2024-07-13 06:10:24.277234] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.690 [2024-07-13 06:10:24.277263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.690 [2024-07-13 06:10:24.288841] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.690 [2024-07-13 06:10:24.288878] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:18.690 [2024-07-13 06:10:24.300987] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:18.690 [2024-07-13 06:10:24.301016] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:18.690 [2024-07-13 06:10:24.310068] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:18.690 [2024-07-13 06:10:24.310098] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pair of *ERROR* lines repeats once per attempt, with the SPDK log timestamps advancing from 06:10:24 to 06:10:27 and the Jenkins elapsed-time prefix from 00:15:18.690 to 00:15:21.271 ...]
00:15:21.271 [2024-07-13 06:10:27.688021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:21.271 [2024-07-13 06:10:27.688051] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:21.271 [2024-07-13 06:10:27.698593]
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.271 [2024-07-13 06:10:27.698622] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.271 [2024-07-13 06:10:27.709837] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.271 [2024-07-13 06:10:27.709874] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.271 [2024-07-13 06:10:27.722933] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.271 [2024-07-13 06:10:27.722971] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.271 [2024-07-13 06:10:27.732791] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.271 [2024-07-13 06:10:27.732820] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.271 [2024-07-13 06:10:27.744443] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.271 [2024-07-13 06:10:27.744473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.271 [2024-07-13 06:10:27.754992] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.271 [2024-07-13 06:10:27.755021] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.271 [2024-07-13 06:10:27.766003] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.271 [2024-07-13 06:10:27.766032] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.271 [2024-07-13 06:10:27.779334] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.271 [2024-07-13 06:10:27.779365] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-07-13 06:10:27.790166] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-07-13 06:10:27.790197] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-07-13 06:10:27.800715] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-07-13 06:10:27.800744] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-07-13 06:10:27.811654] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-07-13 06:10:27.811684] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-07-13 06:10:27.822992] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-07-13 06:10:27.823021] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-07-13 06:10:27.834443] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-07-13 06:10:27.834473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-07-13 06:10:27.847431] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-07-13 06:10:27.847460] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-07-13 06:10:27.857625] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-07-13 06:10:27.857655] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-07-13 06:10:27.868315] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-07-13 06:10:27.868345] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-07-13 06:10:27.879205] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-07-13 06:10:27.879234] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-07-13 06:10:27.890303] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-07-13 06:10:27.890333] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-07-13 06:10:27.901066] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-07-13 06:10:27.901097] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-07-13 06:10:27.912369] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-07-13 06:10:27.912399] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-07-13 06:10:27.923432] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-07-13 06:10:27.923462] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-07-13 06:10:27.934747] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-07-13 06:10:27.934786] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-07-13 06:10:27.946345] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-07-13 06:10:27.946375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-07-13 06:10:27.957425] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-07-13 06:10:27.957455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-07-13 06:10:27.968848] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-07-13 06:10:27.968899] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-07-13 06:10:27.979691] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-07-13 06:10:27.979721] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-07-13 06:10:27.990553] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-07-13 06:10:27.990583] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-07-13 06:10:28.001667] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-07-13 06:10:28.001706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-07-13 06:10:28.014363] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-07-13 06:10:28.014394] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-07-13 06:10:28.024392] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-07-13 06:10:28.024422] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.528 [2024-07-13 06:10:28.035740] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.528 [2024-07-13 06:10:28.035771] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.785 [2024-07-13 06:10:28.046949] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.785 [2024-07-13 06:10:28.046981] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.785 [2024-07-13 06:10:28.058019] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.785 [2024-07-13 06:10:28.058049] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.785 [2024-07-13 06:10:28.069136] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.785 [2024-07-13 06:10:28.069165] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.785 [2024-07-13 06:10:28.080195] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.785 [2024-07-13 06:10:28.080225] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.785 [2024-07-13 06:10:28.091467] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.785 [2024-07-13 06:10:28.091496] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.785 [2024-07-13 06:10:28.102396] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.785 [2024-07-13 06:10:28.102426] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.785 [2024-07-13 06:10:28.113532] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.785 [2024-07-13 06:10:28.113562] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.785 [2024-07-13 06:10:28.124049] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.785 [2024-07-13 06:10:28.124079] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.785 [2024-07-13 06:10:28.134813] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.785 [2024-07-13 06:10:28.134844] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.785 [2024-07-13 06:10:28.145559] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.785 [2024-07-13 06:10:28.145600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.785 [2024-07-13 06:10:28.158237] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.785 [2024-07-13 06:10:28.158266] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.786 [2024-07-13 06:10:28.167578] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.786 [2024-07-13 06:10:28.167607] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.786 [2024-07-13 06:10:28.178123] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.786 [2024-07-13 06:10:28.178151] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.786 [2024-07-13 06:10:28.191533] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.786 [2024-07-13 06:10:28.191564] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.786 [2024-07-13 06:10:28.202107] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.786 [2024-07-13 06:10:28.202137] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.786 [2024-07-13 06:10:28.212997] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.786 [2024-07-13 06:10:28.213027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.786 [2024-07-13 06:10:28.223758] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.786 [2024-07-13 06:10:28.223788] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.786 [2024-07-13 06:10:28.234918] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.786 [2024-07-13 06:10:28.234947] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.786 [2024-07-13 06:10:28.246085] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.786 [2024-07-13 06:10:28.246115] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.786 [2024-07-13 06:10:28.257084] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.786 [2024-07-13 06:10:28.257113] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.786 [2024-07-13 06:10:28.269875] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.786 [2024-07-13 06:10:28.269904] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.786 [2024-07-13 06:10:28.280032] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.786 [2024-07-13 06:10:28.280062] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:21.786 [2024-07-13 06:10:28.291244] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:21.786 [2024-07-13 06:10:28.291274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.044 [2024-07-13 06:10:28.304457] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.044 [2024-07-13 06:10:28.304487] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.044 [2024-07-13 06:10:28.314831] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.044 [2024-07-13 06:10:28.314861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.044 [2024-07-13 06:10:28.325200] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.044 [2024-07-13 06:10:28.325230] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.044 [2024-07-13 06:10:28.335880] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.044 [2024-07-13 06:10:28.335921] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.044 [2024-07-13 06:10:28.346616] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.044 [2024-07-13 06:10:28.346645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.044 [2024-07-13 06:10:28.359522] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.044 [2024-07-13 06:10:28.359553] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.044 [2024-07-13 06:10:28.369450] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.044 [2024-07-13 06:10:28.369479] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.044 [2024-07-13 06:10:28.380093] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.044 [2024-07-13 06:10:28.380122] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.044 [2024-07-13 06:10:28.393154] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.044 [2024-07-13 06:10:28.393184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.044 [2024-07-13 06:10:28.402980] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.044 [2024-07-13 06:10:28.403011] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.044 [2024-07-13 06:10:28.414342] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.044 [2024-07-13 06:10:28.414371] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.044 [2024-07-13 06:10:28.425492] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.044 [2024-07-13 06:10:28.425522] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.044 [2024-07-13 06:10:28.436533] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.044 [2024-07-13 06:10:28.436563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.044 [2024-07-13 06:10:28.449719] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.044 [2024-07-13 06:10:28.449748] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.044 [2024-07-13 06:10:28.459154] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.044 [2024-07-13 06:10:28.459183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.044 [2024-07-13 06:10:28.470813] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.044 [2024-07-13 06:10:28.470843] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.044 [2024-07-13 06:10:28.481307] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.044 [2024-07-13 06:10:28.481337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.044 [2024-07-13 06:10:28.492059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.044 [2024-07-13 06:10:28.492089] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.044 [2024-07-13 06:10:28.504573] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.044 [2024-07-13 06:10:28.504603] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.044 [2024-07-13 06:10:28.514821] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.044 [2024-07-13 06:10:28.514850] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.044 [2024-07-13 06:10:28.525445] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.044 [2024-07-13 06:10:28.525475] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.044 [2024-07-13 06:10:28.536343] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.044 [2024-07-13 06:10:28.536373] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.044 [2024-07-13 06:10:28.546907] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.044 [2024-07-13 06:10:28.546937] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.303 [2024-07-13 06:10:28.557808] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.303 [2024-07-13 06:10:28.557839] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.303 [2024-07-13 06:10:28.568420] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.303 [2024-07-13 06:10:28.568449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.303 [2024-07-13 06:10:28.581126] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.303 [2024-07-13 06:10:28.581156] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.303 [2024-07-13 06:10:28.591803] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.303 [2024-07-13 06:10:28.591833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.303 [2024-07-13 06:10:28.602463] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.303 [2024-07-13 06:10:28.602493] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.303 [2024-07-13 06:10:28.615401] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.303 [2024-07-13 06:10:28.615431] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.303 [2024-07-13 06:10:28.624290] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.303 [2024-07-13 06:10:28.624319] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.303 [2024-07-13 06:10:28.637394] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.303 [2024-07-13 06:10:28.637424] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.303 [2024-07-13 06:10:28.647886] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.303 [2024-07-13 06:10:28.647916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.303 [2024-07-13 06:10:28.658420] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.303 [2024-07-13 06:10:28.658449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.303 [2024-07-13 06:10:28.669704] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.303 [2024-07-13 06:10:28.669733] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.303 [2024-07-13 06:10:28.680374] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.303 [2024-07-13 06:10:28.680403] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.303 [2024-07-13 06:10:28.693217] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.303 [2024-07-13 06:10:28.693247] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.303 [2024-07-13 06:10:28.704085] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.303 [2024-07-13 06:10:28.704114] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.303 [2024-07-13 06:10:28.715142] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.303 [2024-07-13 06:10:28.715171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.303 [2024-07-13 06:10:28.727408] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.303 [2024-07-13 06:10:28.727437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.303 [2024-07-13 06:10:28.737237] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.303 [2024-07-13 06:10:28.737268] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.303 [2024-07-13 06:10:28.748307] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.303 [2024-07-13 06:10:28.748337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.303 [2024-07-13 06:10:28.758778] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.303 [2024-07-13 06:10:28.758807] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.303 [2024-07-13 06:10:28.769676] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.303 [2024-07-13 06:10:28.769706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.303 [2024-07-13 06:10:28.782335] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.303 [2024-07-13 06:10:28.782365] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.303 [2024-07-13 06:10:28.792985] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.303 [2024-07-13 06:10:28.793015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.303 [2024-07-13 06:10:28.803667] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.303 [2024-07-13 06:10:28.803697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.561 [2024-07-13 06:10:28.816825] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.561 [2024-07-13 06:10:28.816855] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:22.561 [2024-07-13 06:10:28.826519] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:22.561 [2024-07-13 06:10:28.826549] 
00:15:22.561                                                                            Latency(us)
00:15:22.561 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:15:22.561 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:15:22.561 Nvme1n1                     :       5.01   11682.17      91.27       0.00       0.00   10942.12    4903.06   23010.42
00:15:22.561 ===================================================================================================================
00:15:22.561 Total                       :                11682.17      91.27       0.00       0.00   10942.12    4903.06   23010.42
[the same subsystem.c/nvmf_rpc.c message pair repeated for the remaining add_ns attempts issued between 06:10:28.849 and 06:10:29.130]
00:15:22.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1113474) - No such process
00:15:22.821 06:10:29 -- target/zcopy.sh@49 -- # wait 1113474
00:15:22.821 06:10:29 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:22.821 06:10:29 -- common/autotest_common.sh@551 -- # xtrace_disable
00:15:22.821 06:10:29 -- common/autotest_common.sh@10 -- # set +x
00:15:22.821 06:10:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:15:22.821 06:10:29 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:15:22.821 06:10:29 -- common/autotest_common.sh@551 -- # xtrace_disable
00:15:22.821 06:10:29 -- common/autotest_common.sh@10 -- # set +x
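For reference, the namespace swap that zcopy.sh drives here through the rpc_cmd wrapper corresponds to the rpc.py sequence sketched below. The bdev and subsystem names are the ones used in this run; the /var/tmp/spdk.sock socket path is an assumption based on the default the target advertises later in this log. The delay0 printed on the next line is rpc.py echoing the name of the bdev it just created.

# Sketch only: the same RPCs zcopy.sh issues via rpc_cmd, run directly against the
# target's RPC socket (assumed to be the default /var/tmp/spdk.sock).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock

# Drop the malloc-backed namespace the fio run was using.
$rpc -s $sock nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1

# Wrap malloc0 in a delay bdev that adds ~1s (1000000 us) to reads and writes.
$rpc -s $sock bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000

# Re-export the delayed bdev as NSID 1 so the abort test below has slow I/O to cancel.
$rpc -s $sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1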
00:15:22.821 delay0 00:15:22.821 06:10:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:22.821 06:10:29 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:22.821 06:10:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:22.821 06:10:29 -- common/autotest_common.sh@10 -- # set +x 00:15:22.821 06:10:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:22.821 06:10:29 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:22.821 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.821 [2024-07-13 06:10:29.247130] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:30.927 Initializing NVMe Controllers 00:15:30.927 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:30.927 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:30.927 Initialization complete. Launching workers. 00:15:30.927 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 270, failed: 15717 00:15:30.927 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 15889, failed to submit 98 00:15:30.927 success 15765, unsuccess 124, failed 0 00:15:30.927 06:10:36 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:30.927 06:10:36 -- target/zcopy.sh@60 -- # nvmftestfini 00:15:30.927 06:10:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:30.927 06:10:36 -- nvmf/common.sh@116 -- # sync 00:15:30.927 06:10:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:30.927 06:10:36 -- nvmf/common.sh@119 -- # set +e 00:15:30.927 06:10:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:30.927 06:10:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:30.927 rmmod nvme_tcp 00:15:30.928 rmmod nvme_fabrics 00:15:30.928 rmmod nvme_keyring 00:15:30.928 06:10:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:30.928 06:10:36 -- nvmf/common.sh@123 -- # set -e 00:15:30.928 06:10:36 -- nvmf/common.sh@124 -- # return 0 00:15:30.928 06:10:36 -- nvmf/common.sh@477 -- # '[' -n 1112092 ']' 00:15:30.928 06:10:36 -- nvmf/common.sh@478 -- # killprocess 1112092 00:15:30.928 06:10:36 -- common/autotest_common.sh@926 -- # '[' -z 1112092 ']' 00:15:30.928 06:10:36 -- common/autotest_common.sh@930 -- # kill -0 1112092 00:15:30.928 06:10:36 -- common/autotest_common.sh@931 -- # uname 00:15:30.928 06:10:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:30.928 06:10:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1112092 00:15:30.928 06:10:36 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:30.928 06:10:36 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:30.928 06:10:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1112092' 00:15:30.928 killing process with pid 1112092 00:15:30.928 06:10:36 -- common/autotest_common.sh@945 -- # kill 1112092 00:15:30.928 06:10:36 -- common/autotest_common.sh@950 -- # wait 1112092 00:15:30.928 06:10:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:30.928 06:10:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:30.928 06:10:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:30.928 06:10:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:15:30.928 06:10:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:30.928 06:10:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.928 06:10:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:30.928 06:10:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.306 06:10:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:32.306 00:15:32.306 real 0m29.500s 00:15:32.306 user 0m42.891s 00:15:32.306 sys 0m9.301s 00:15:32.306 06:10:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:32.306 06:10:38 -- common/autotest_common.sh@10 -- # set +x 00:15:32.306 ************************************ 00:15:32.306 END TEST nvmf_zcopy 00:15:32.306 ************************************ 00:15:32.306 06:10:38 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:32.306 06:10:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:32.306 06:10:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:32.306 06:10:38 -- common/autotest_common.sh@10 -- # set +x 00:15:32.306 ************************************ 00:15:32.306 START TEST nvmf_nmic 00:15:32.306 ************************************ 00:15:32.306 06:10:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:32.565 * Looking for test storage... 00:15:32.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:32.565 06:10:38 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:32.565 06:10:38 -- nvmf/common.sh@7 -- # uname -s 00:15:32.565 06:10:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:32.565 06:10:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:32.565 06:10:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:32.565 06:10:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:32.565 06:10:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:32.565 06:10:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:32.565 06:10:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:32.565 06:10:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:32.565 06:10:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:32.565 06:10:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:32.565 06:10:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:32.565 06:10:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:32.565 06:10:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:32.565 06:10:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:32.565 06:10:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:32.565 06:10:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:32.565 06:10:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:32.565 06:10:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:32.565 06:10:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:32.565 06:10:38 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.565 06:10:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.565 06:10:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.565 06:10:38 -- paths/export.sh@5 -- # export PATH 00:15:32.565 06:10:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.565 06:10:38 -- nvmf/common.sh@46 -- # : 0 00:15:32.565 06:10:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:32.565 06:10:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:32.565 06:10:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:32.565 06:10:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:32.565 06:10:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:32.565 06:10:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:32.565 06:10:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:32.565 06:10:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:32.565 06:10:38 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:32.565 06:10:38 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:32.565 06:10:38 -- target/nmic.sh@14 -- # nvmftestinit 00:15:32.565 06:10:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:32.565 06:10:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:32.565 06:10:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:32.565 06:10:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:32.565 06:10:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:32.565 06:10:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:15:32.565 06:10:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:32.565 06:10:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.565 06:10:38 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:32.565 06:10:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:32.565 06:10:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:32.565 06:10:38 -- common/autotest_common.sh@10 -- # set +x 00:15:34.467 06:10:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:34.467 06:10:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:34.467 06:10:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:34.467 06:10:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:34.467 06:10:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:34.467 06:10:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:34.467 06:10:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:34.467 06:10:40 -- nvmf/common.sh@294 -- # net_devs=() 00:15:34.467 06:10:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:34.467 06:10:40 -- nvmf/common.sh@295 -- # e810=() 00:15:34.467 06:10:40 -- nvmf/common.sh@295 -- # local -ga e810 00:15:34.467 06:10:40 -- nvmf/common.sh@296 -- # x722=() 00:15:34.467 06:10:40 -- nvmf/common.sh@296 -- # local -ga x722 00:15:34.467 06:10:40 -- nvmf/common.sh@297 -- # mlx=() 00:15:34.467 06:10:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:34.467 06:10:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:34.467 06:10:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:34.467 06:10:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:34.467 06:10:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:34.467 06:10:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:34.467 06:10:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:34.467 06:10:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:34.467 06:10:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:34.467 06:10:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:34.467 06:10:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:34.467 06:10:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:34.467 06:10:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:34.467 06:10:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:34.467 06:10:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:34.467 06:10:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:34.467 06:10:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:34.467 06:10:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:34.467 06:10:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:34.467 06:10:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:34.467 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:34.467 06:10:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:34.467 06:10:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:34.467 06:10:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:34.467 06:10:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:34.467 06:10:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:34.467 06:10:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:34.467 06:10:40 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:34.467 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:34.467 06:10:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:34.467 06:10:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:34.467 06:10:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:34.467 06:10:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:34.467 06:10:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:34.467 06:10:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:34.467 06:10:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:34.467 06:10:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:34.467 06:10:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:34.467 06:10:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:34.467 06:10:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:34.467 06:10:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:34.467 06:10:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:34.467 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:34.467 06:10:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:34.467 06:10:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:34.467 06:10:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:34.467 06:10:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:34.467 06:10:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:34.467 06:10:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:34.467 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:34.467 06:10:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:34.467 06:10:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:34.467 06:10:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:34.467 06:10:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:34.467 06:10:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:34.467 06:10:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:34.467 06:10:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:34.467 06:10:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:34.467 06:10:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:34.467 06:10:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:34.467 06:10:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:34.467 06:10:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:34.467 06:10:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:34.467 06:10:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:34.467 06:10:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:34.467 06:10:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:34.467 06:10:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:34.467 06:10:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:34.467 06:10:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:34.467 06:10:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:34.467 06:10:40 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:34.467 06:10:40 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:34.467 06:10:40 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
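Condensed, the nvmf_tcp_init sequence traced above boils down to the sketch below: the first ice port (cvl_0_0) is moved into a private network namespace and becomes the target-side NIC at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. The interface names come from this machine's PCI scan and will differ elsewhere.

# Sketch of the target/initiator split built above (names and addresses from this run).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add cvl_0_0_ns_spdk                     # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target NIC moves into it

ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up

The loopback, iptables and ping steps that follow open TCP port 4420 towards the initiator interface and confirm basic reachability before the target is started.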
00:15:34.467 06:10:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:34.467 06:10:40 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:34.467 06:10:40 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:34.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:34.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:15:34.725 00:15:34.725 --- 10.0.0.2 ping statistics --- 00:15:34.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.725 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:15:34.725 06:10:40 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:34.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:34.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:15:34.725 00:15:34.725 --- 10.0.0.1 ping statistics --- 00:15:34.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:34.725 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:15:34.725 06:10:40 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:34.725 06:10:40 -- nvmf/common.sh@410 -- # return 0 00:15:34.725 06:10:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:34.725 06:10:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:34.725 06:10:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:34.725 06:10:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:34.725 06:10:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:34.725 06:10:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:34.725 06:10:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:34.725 06:10:41 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:34.725 06:10:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:34.725 06:10:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:34.725 06:10:41 -- common/autotest_common.sh@10 -- # set +x 00:15:34.725 06:10:41 -- nvmf/common.sh@469 -- # nvmfpid=1117047 00:15:34.725 06:10:41 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:34.725 06:10:41 -- nvmf/common.sh@470 -- # waitforlisten 1117047 00:15:34.725 06:10:41 -- common/autotest_common.sh@819 -- # '[' -z 1117047 ']' 00:15:34.725 06:10:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.725 06:10:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:34.725 06:10:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.725 06:10:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:34.725 06:10:41 -- common/autotest_common.sh@10 -- # set +x 00:15:34.725 [2024-07-13 06:10:41.055836] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
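At this point nvmf_tgt has been launched inside the new namespace with a four-core mask and the full tracepoint mask, and the harness blocks until its JSON-RPC socket answers before issuing any rpc_cmd. A rough equivalent of that start/wait step (the polling loop is an assumed stand-in for waitforlisten, whose real logic lives in autotest_common.sh):

# Sketch: start nvmf_tgt in the target namespace and wait for its RPC socket.
#   -i 0      shared-memory instance id
#   -e 0xFFFF tracepoint group mask (source of the trace notices in the startup output)
#   -m 0xF    run reactors on cores 0-3
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for _ in $(seq 1 100); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt died during startup'; exit 1; }
    sleep 0.1
done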
00:15:34.725 [2024-07-13 06:10:41.055922] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.725 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.725 [2024-07-13 06:10:41.125856] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:34.983 [2024-07-13 06:10:41.246417] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:34.983 [2024-07-13 06:10:41.246585] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:34.983 [2024-07-13 06:10:41.246605] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:34.983 [2024-07-13 06:10:41.246619] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:34.983 [2024-07-13 06:10:41.246721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.983 [2024-07-13 06:10:41.246768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:34.983 [2024-07-13 06:10:41.246820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:34.983 [2024-07-13 06:10:41.246823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.548 06:10:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:35.548 06:10:41 -- common/autotest_common.sh@852 -- # return 0 00:15:35.548 06:10:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:35.548 06:10:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:35.548 06:10:41 -- common/autotest_common.sh@10 -- # set +x 00:15:35.548 06:10:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:35.548 06:10:41 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:35.548 06:10:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:35.548 06:10:41 -- common/autotest_common.sh@10 -- # set +x 00:15:35.548 [2024-07-13 06:10:42.003255] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:35.548 06:10:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:35.548 06:10:42 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:35.548 06:10:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:35.548 06:10:42 -- common/autotest_common.sh@10 -- # set +x 00:15:35.548 Malloc0 00:15:35.548 06:10:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:35.548 06:10:42 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:35.548 06:10:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:35.548 06:10:42 -- common/autotest_common.sh@10 -- # set +x 00:15:35.548 06:10:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:35.548 06:10:42 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:35.548 06:10:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:35.548 06:10:42 -- common/autotest_common.sh@10 -- # set +x 00:15:35.548 06:10:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:35.548 06:10:42 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:35.548 06:10:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:35.548 06:10:42 -- 
common/autotest_common.sh@10 -- # set +x 00:15:35.548 [2024-07-13 06:10:42.054485] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:35.548 06:10:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:35.548 06:10:42 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:35.548 test case1: single bdev can't be used in multiple subsystems 00:15:35.548 06:10:42 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:35.548 06:10:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:35.548 06:10:42 -- common/autotest_common.sh@10 -- # set +x 00:15:35.805 06:10:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:35.805 06:10:42 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:35.805 06:10:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:35.805 06:10:42 -- common/autotest_common.sh@10 -- # set +x 00:15:35.805 06:10:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:35.805 06:10:42 -- target/nmic.sh@28 -- # nmic_status=0 00:15:35.805 06:10:42 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:35.805 06:10:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:35.805 06:10:42 -- common/autotest_common.sh@10 -- # set +x 00:15:35.805 [2024-07-13 06:10:42.078369] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:35.805 [2024-07-13 06:10:42.078405] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:35.805 [2024-07-13 06:10:42.078435] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:35.805 request: 00:15:35.805 { 00:15:35.805 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:35.805 "namespace": { 00:15:35.805 "bdev_name": "Malloc0" 00:15:35.805 }, 00:15:35.805 "method": "nvmf_subsystem_add_ns", 00:15:35.805 "req_id": 1 00:15:35.805 } 00:15:35.805 Got JSON-RPC error response 00:15:35.805 response: 00:15:35.805 { 00:15:35.805 "code": -32602, 00:15:35.805 "message": "Invalid parameters" 00:15:35.805 } 00:15:35.805 06:10:42 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:15:35.805 06:10:42 -- target/nmic.sh@29 -- # nmic_status=1 00:15:35.805 06:10:42 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:35.805 06:10:42 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:35.805 Adding namespace failed - expected result. 
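Test case1 above passes because the target refuses to export one bdev through two subsystems: Malloc0 is already claimed exclusive_write by cnode1, so the second nvmf_subsystem_add_ns has to fail with the JSON-RPC error shown. The same check can be reproduced by hand with rpc.py (paths shortened; the commands mirror what nmic.sh issues via rpc_cmd):

# Sketch: reproduce the expected "bdev already claimed" failure.
rpc=./scripts/rpc.py
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0      # first claim succeeds
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
if ! $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo ' Adding namespace failed - expected result.'
fi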
00:15:35.805 06:10:42 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:35.805 test case2: host connect to nvmf target in multiple paths 00:15:35.805 06:10:42 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:35.805 06:10:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:35.805 06:10:42 -- common/autotest_common.sh@10 -- # set +x 00:15:35.805 [2024-07-13 06:10:42.086493] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:35.805 06:10:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:35.805 06:10:42 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:36.370 06:10:42 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:36.933 06:10:43 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:36.933 06:10:43 -- common/autotest_common.sh@1177 -- # local i=0 00:15:36.933 06:10:43 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:36.933 06:10:43 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:15:36.933 06:10:43 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:38.827 06:10:45 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:38.827 06:10:45 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:38.827 06:10:45 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:38.827 06:10:45 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:15:38.827 06:10:45 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:38.827 06:10:45 -- common/autotest_common.sh@1187 -- # return 0 00:15:38.827 06:10:45 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:38.827 [global] 00:15:38.827 thread=1 00:15:38.827 invalidate=1 00:15:38.827 rw=write 00:15:38.827 time_based=1 00:15:38.827 runtime=1 00:15:38.827 ioengine=libaio 00:15:38.827 direct=1 00:15:38.827 bs=4096 00:15:38.827 iodepth=1 00:15:38.827 norandommap=0 00:15:38.827 numjobs=1 00:15:38.827 00:15:39.084 verify_dump=1 00:15:39.084 verify_backlog=512 00:15:39.084 verify_state_save=0 00:15:39.084 do_verify=1 00:15:39.084 verify=crc32c-intel 00:15:39.084 [job0] 00:15:39.084 filename=/dev/nvme0n1 00:15:39.084 Could not set queue depth (nvme0n1) 00:15:39.084 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:39.084 fio-3.35 00:15:39.084 Starting 1 thread 00:15:40.454 00:15:40.454 job0: (groupid=0, jobs=1): err= 0: pid=1117702: Sat Jul 13 06:10:46 2024 00:15:40.454 read: IOPS=661, BW=2646KiB/s (2710kB/s)(2736KiB/1034msec) 00:15:40.454 slat (nsec): min=7632, max=48014, avg=16003.66, stdev=3647.53 00:15:40.454 clat (usec): min=245, max=41023, avg=1035.79, stdev=5338.87 00:15:40.454 lat (usec): min=253, max=41038, avg=1051.79, stdev=5339.07 00:15:40.454 clat percentiles (usec): 00:15:40.454 | 1.00th=[ 253], 5.00th=[ 269], 10.00th=[ 281], 20.00th=[ 285], 00:15:40.454 | 30.00th=[ 289], 40.00th=[ 293], 50.00th=[ 297], 60.00th=[ 302], 00:15:40.454 | 70.00th=[ 306], 
80.00th=[ 314], 90.00th=[ 482], 95.00th=[ 510], 00:15:40.454 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:40.454 | 99.99th=[41157] 00:15:40.454 write: IOPS=990, BW=3961KiB/s (4056kB/s)(4096KiB/1034msec); 0 zone resets 00:15:40.454 slat (usec): min=9, max=40601, avg=87.98, stdev=1555.78 00:15:40.454 clat (usec): min=164, max=506, avg=210.79, stdev=22.48 00:15:40.454 lat (usec): min=180, max=40844, avg=298.77, stdev=1559.70 00:15:40.454 clat percentiles (usec): 00:15:40.454 | 1.00th=[ 174], 5.00th=[ 180], 10.00th=[ 186], 20.00th=[ 196], 00:15:40.454 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 215], 00:15:40.454 | 70.00th=[ 219], 80.00th=[ 223], 90.00th=[ 231], 95.00th=[ 239], 00:15:40.454 | 99.00th=[ 293], 99.50th=[ 302], 99.90th=[ 371], 99.95th=[ 506], 00:15:40.454 | 99.99th=[ 506] 00:15:40.454 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:15:40.454 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:40.454 lat (usec) : 250=58.37%, 500=38.82%, 750=2.11% 00:15:40.454 lat (msec) : 50=0.70% 00:15:40.454 cpu : usr=1.84%, sys=4.45%, ctx=1711, majf=0, minf=2 00:15:40.454 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:40.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.454 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:40.454 issued rwts: total=684,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:40.454 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:40.454 00:15:40.454 Run status group 0 (all jobs): 00:15:40.454 READ: bw=2646KiB/s (2710kB/s), 2646KiB/s-2646KiB/s (2710kB/s-2710kB/s), io=2736KiB (2802kB), run=1034-1034msec 00:15:40.454 WRITE: bw=3961KiB/s (4056kB/s), 3961KiB/s-3961KiB/s (4056kB/s-4056kB/s), io=4096KiB (4194kB), run=1034-1034msec 00:15:40.454 00:15:40.454 Disk stats (read/write): 00:15:40.454 nvme0n1: ios=705/1024, merge=0/0, ticks=1512/189, in_queue=1701, util=99.70% 00:15:40.454 06:10:46 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:40.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:40.454 06:10:46 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:40.454 06:10:46 -- common/autotest_common.sh@1198 -- # local i=0 00:15:40.454 06:10:46 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:40.454 06:10:46 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:40.454 06:10:46 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:40.454 06:10:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:40.454 06:10:46 -- common/autotest_common.sh@1210 -- # return 0 00:15:40.454 06:10:46 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:40.454 06:10:46 -- target/nmic.sh@53 -- # nvmftestfini 00:15:40.454 06:10:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:40.455 06:10:46 -- nvmf/common.sh@116 -- # sync 00:15:40.455 06:10:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:40.455 06:10:46 -- nvmf/common.sh@119 -- # set +e 00:15:40.455 06:10:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:40.455 06:10:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:40.455 rmmod nvme_tcp 00:15:40.455 rmmod nvme_fabrics 00:15:40.455 rmmod nvme_keyring 00:15:40.455 06:10:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:40.455 06:10:46 -- nvmf/common.sh@123 -- # set -e 00:15:40.455 06:10:46 -- 
nvmf/common.sh@124 -- # return 0 00:15:40.455 06:10:46 -- nvmf/common.sh@477 -- # '[' -n 1117047 ']' 00:15:40.455 06:10:46 -- nvmf/common.sh@478 -- # killprocess 1117047 00:15:40.455 06:10:46 -- common/autotest_common.sh@926 -- # '[' -z 1117047 ']' 00:15:40.455 06:10:46 -- common/autotest_common.sh@930 -- # kill -0 1117047 00:15:40.455 06:10:46 -- common/autotest_common.sh@931 -- # uname 00:15:40.455 06:10:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:40.455 06:10:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1117047 00:15:40.712 06:10:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:40.712 06:10:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:40.712 06:10:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1117047' 00:15:40.712 killing process with pid 1117047 00:15:40.712 06:10:46 -- common/autotest_common.sh@945 -- # kill 1117047 00:15:40.712 06:10:46 -- common/autotest_common.sh@950 -- # wait 1117047 00:15:40.970 06:10:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:40.970 06:10:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:40.970 06:10:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:40.970 06:10:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:40.970 06:10:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:40.970 06:10:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.970 06:10:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:40.970 06:10:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:42.869 06:10:49 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:42.869 00:15:42.869 real 0m10.562s 00:15:42.869 user 0m25.113s 00:15:42.869 sys 0m2.357s 00:15:42.869 06:10:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:42.869 06:10:49 -- common/autotest_common.sh@10 -- # set +x 00:15:42.869 ************************************ 00:15:42.869 END TEST nvmf_nmic 00:15:42.869 ************************************ 00:15:42.869 06:10:49 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:42.869 06:10:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:42.869 06:10:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:42.869 06:10:49 -- common/autotest_common.sh@10 -- # set +x 00:15:42.869 ************************************ 00:15:42.869 START TEST nvmf_fio_target 00:15:42.869 ************************************ 00:15:42.869 06:10:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:43.126 * Looking for test storage... 
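nvmf_nmic finishes by unwinding everything it set up: the kernel initiator modules are unloaded, the nvmf_tgt process is killed and reaped, and nvmf_tcp_fini drops the spdk namespace and flushes the initiator address before the next test script starts. A condensed sketch of that teardown (module and interface names taken from the lines above; the namespace removal is an assumed equivalent of _remove_spdk_ns):

# Sketch: cleanup performed by nvmftestfini between test scripts.
sync
modprobe -v -r nvme-tcp nvme-fabrics            # detach the kernel initiator stack
kill "$nvmfpid" && wait "$nvmfpid"              # stop the nvmf_tgt started earlier
ip netns delete cvl_0_0_ns_spdk 2>/dev/null     # assumed stand-in for _remove_spdk_ns
ip -4 addr flush cvl_0_1                        # release 10.0.0.1 on the initiator port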
00:15:43.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:43.126 06:10:49 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:43.126 06:10:49 -- nvmf/common.sh@7 -- # uname -s 00:15:43.126 06:10:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:43.126 06:10:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:43.126 06:10:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:43.126 06:10:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:43.126 06:10:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:43.126 06:10:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:43.126 06:10:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:43.126 06:10:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:43.126 06:10:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:43.126 06:10:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:43.126 06:10:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:43.126 06:10:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:43.126 06:10:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:43.126 06:10:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:43.126 06:10:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:43.126 06:10:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:43.126 06:10:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:43.126 06:10:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:43.126 06:10:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:43.126 06:10:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.126 06:10:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.126 06:10:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.126 06:10:49 -- paths/export.sh@5 -- # export PATH 00:15:43.126 06:10:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.126 06:10:49 -- nvmf/common.sh@46 -- # : 0 00:15:43.126 06:10:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:43.126 06:10:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:43.126 06:10:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:43.126 06:10:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:43.126 06:10:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:43.126 06:10:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:43.126 06:10:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:43.126 06:10:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:43.126 06:10:49 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:43.126 06:10:49 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:43.126 06:10:49 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:43.126 06:10:49 -- target/fio.sh@16 -- # nvmftestinit 00:15:43.126 06:10:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:43.126 06:10:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:43.126 06:10:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:43.126 06:10:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:43.126 06:10:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:43.126 06:10:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.126 06:10:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:43.126 06:10:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.126 06:10:49 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:43.127 06:10:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:43.127 06:10:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:43.127 06:10:49 -- common/autotest_common.sh@10 -- # set +x 00:15:45.027 06:10:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:45.027 06:10:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:45.027 06:10:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:45.027 06:10:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:45.027 06:10:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:45.027 06:10:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:45.027 06:10:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:45.027 06:10:51 -- nvmf/common.sh@294 -- # net_devs=() 
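fio.sh drives the whole setup through scripts/rpc.py: the 64 MB / 512-byte malloc bdevs sized here are created one by one and then combined into a raid0 stripe and a concat bdev before being exported as namespaces of cnode1. The bdev-side commands, as they appear further down in this log, boil down to:

# Sketch: the bdev layout fio.sh builds (sizes from MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE).
rpc=./scripts/rpc.py
$rpc bdev_malloc_create 64 512 -b Malloc0            # exported directly
$rpc bdev_malloc_create 64 512 -b Malloc1            # exported directly
$rpc bdev_malloc_create 64 512 -b Malloc2
$rpc bdev_malloc_create 64 512 -b Malloc3
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_malloc_create 64 512 -b Malloc4
$rpc bdev_malloc_create 64 512 -b Malloc5
$rpc bdev_malloc_create 64 512 -b Malloc6
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

Together these give cnode1 four namespaces (Malloc0, Malloc1, raid0, concat0), which is why the later connect step waits for four SPDKISFASTANDAWESOME devices before starting fio.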
00:15:45.027 06:10:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:45.027 06:10:51 -- nvmf/common.sh@295 -- # e810=() 00:15:45.027 06:10:51 -- nvmf/common.sh@295 -- # local -ga e810 00:15:45.027 06:10:51 -- nvmf/common.sh@296 -- # x722=() 00:15:45.027 06:10:51 -- nvmf/common.sh@296 -- # local -ga x722 00:15:45.027 06:10:51 -- nvmf/common.sh@297 -- # mlx=() 00:15:45.027 06:10:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:45.027 06:10:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:45.027 06:10:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:45.027 06:10:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:45.027 06:10:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:45.027 06:10:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:45.027 06:10:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:45.027 06:10:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:45.027 06:10:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:45.027 06:10:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:45.027 06:10:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:45.027 06:10:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:45.027 06:10:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:45.027 06:10:51 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:45.027 06:10:51 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:45.027 06:10:51 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:45.027 06:10:51 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:45.027 06:10:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:45.027 06:10:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:45.027 06:10:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:45.027 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:45.027 06:10:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:45.027 06:10:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:45.027 06:10:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:45.027 06:10:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:45.027 06:10:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:45.027 06:10:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:45.027 06:10:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:45.027 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:45.027 06:10:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:45.027 06:10:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:45.027 06:10:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:45.027 06:10:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:45.027 06:10:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:45.027 06:10:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:45.027 06:10:51 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:45.027 06:10:51 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:45.027 06:10:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:45.027 06:10:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:45.027 06:10:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:45.027 06:10:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:15:45.027 06:10:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:45.027 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:45.027 06:10:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:45.027 06:10:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:45.027 06:10:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:45.027 06:10:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:45.027 06:10:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:45.027 06:10:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:45.027 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:45.027 06:10:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:45.027 06:10:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:45.027 06:10:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:45.027 06:10:51 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:45.027 06:10:51 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:45.027 06:10:51 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:45.027 06:10:51 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:45.027 06:10:51 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:45.027 06:10:51 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:45.027 06:10:51 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:45.027 06:10:51 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:45.027 06:10:51 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:45.027 06:10:51 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:45.027 06:10:51 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:45.027 06:10:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:45.027 06:10:51 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:45.027 06:10:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:45.027 06:10:51 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:45.027 06:10:51 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:45.286 06:10:51 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:45.286 06:10:51 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:45.286 06:10:51 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:45.286 06:10:51 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:45.286 06:10:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:45.286 06:10:51 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:45.286 06:10:51 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:45.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:45.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:15:45.286 00:15:45.286 --- 10.0.0.2 ping statistics --- 00:15:45.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.286 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:15:45.286 06:10:51 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:45.286 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:45.286 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:15:45.286 00:15:45.286 --- 10.0.0.1 ping statistics --- 00:15:45.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:45.286 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:15:45.286 06:10:51 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:45.286 06:10:51 -- nvmf/common.sh@410 -- # return 0 00:15:45.286 06:10:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:45.286 06:10:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:45.286 06:10:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:45.286 06:10:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:45.286 06:10:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:45.286 06:10:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:45.286 06:10:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:45.286 06:10:51 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:45.286 06:10:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:45.286 06:10:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:45.286 06:10:51 -- common/autotest_common.sh@10 -- # set +x 00:15:45.286 06:10:51 -- nvmf/common.sh@469 -- # nvmfpid=1119792 00:15:45.286 06:10:51 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:45.286 06:10:51 -- nvmf/common.sh@470 -- # waitforlisten 1119792 00:15:45.286 06:10:51 -- common/autotest_common.sh@819 -- # '[' -z 1119792 ']' 00:15:45.286 06:10:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.286 06:10:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:45.286 06:10:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:45.286 06:10:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:45.286 06:10:51 -- common/autotest_common.sh@10 -- # set +x 00:15:45.286 [2024-07-13 06:10:51.692032] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:45.286 [2024-07-13 06:10:51.692105] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:45.286 EAL: No free 2048 kB hugepages reported on node 1 00:15:45.286 [2024-07-13 06:10:51.757771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:45.544 [2024-07-13 06:10:51.868156] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:45.544 [2024-07-13 06:10:51.868325] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:45.544 [2024-07-13 06:10:51.868345] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:45.544 [2024-07-13 06:10:51.868358] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
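The two app_setup_trace notices above are actionable: because the target was started with -e 0xFFFF, a snapshot of the nvmf tracepoints can be pulled from shared memory while it runs, or the shm file can be kept for offline decoding. Following those hints (the spdk_trace binary location is an assumption about this build tree):

# Sketch: capture the trace snapshot suggested by the startup notice.
./build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt

# Or keep the raw shared-memory file for later analysis, as the second notice suggests.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0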
00:15:45.544 [2024-07-13 06:10:51.868455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:45.544 [2024-07-13 06:10:51.868505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:45.544 [2024-07-13 06:10:51.868554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:45.544 [2024-07-13 06:10:51.868557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:46.485 06:10:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:46.485 06:10:52 -- common/autotest_common.sh@852 -- # return 0 00:15:46.485 06:10:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:46.485 06:10:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:46.485 06:10:52 -- common/autotest_common.sh@10 -- # set +x 00:15:46.485 06:10:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:46.485 06:10:52 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:46.485 [2024-07-13 06:10:52.871084] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:46.485 06:10:52 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:46.744 06:10:53 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:46.744 06:10:53 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:47.001 06:10:53 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:47.001 06:10:53 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:47.258 06:10:53 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:15:47.258 06:10:53 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:47.515 06:10:53 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:47.515 06:10:53 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:47.771 06:10:54 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:48.043 06:10:54 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:48.043 06:10:54 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:48.323 06:10:54 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:48.323 06:10:54 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:48.581 06:10:54 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:48.581 06:10:54 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:48.838 06:10:55 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:49.096 06:10:55 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:49.096 06:10:55 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:49.354 06:10:55 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:49.354 06:10:55 
-- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:49.354 06:10:55 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:49.612 [2024-07-13 06:10:56.080819] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:49.612 06:10:56 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:49.870 06:10:56 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:50.128 06:10:56 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:51.063 06:10:57 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:51.063 06:10:57 -- common/autotest_common.sh@1177 -- # local i=0 00:15:51.063 06:10:57 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:51.063 06:10:57 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:15:51.063 06:10:57 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:15:51.063 06:10:57 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:52.962 06:10:59 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:52.962 06:10:59 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:52.962 06:10:59 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:52.962 06:10:59 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:15:52.962 06:10:59 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:52.962 06:10:59 -- common/autotest_common.sh@1187 -- # return 0 00:15:52.962 06:10:59 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:52.962 [global] 00:15:52.962 thread=1 00:15:52.962 invalidate=1 00:15:52.962 rw=write 00:15:52.962 time_based=1 00:15:52.962 runtime=1 00:15:52.962 ioengine=libaio 00:15:52.962 direct=1 00:15:52.962 bs=4096 00:15:52.962 iodepth=1 00:15:52.962 norandommap=0 00:15:52.962 numjobs=1 00:15:52.962 00:15:52.962 verify_dump=1 00:15:52.962 verify_backlog=512 00:15:52.962 verify_state_save=0 00:15:52.962 do_verify=1 00:15:52.962 verify=crc32c-intel 00:15:52.962 [job0] 00:15:52.962 filename=/dev/nvme0n1 00:15:52.962 [job1] 00:15:52.962 filename=/dev/nvme0n2 00:15:52.962 [job2] 00:15:52.962 filename=/dev/nvme0n3 00:15:52.962 [job3] 00:15:52.962 filename=/dev/nvme0n4 00:15:52.962 Could not set queue depth (nvme0n1) 00:15:52.962 Could not set queue depth (nvme0n2) 00:15:52.962 Could not set queue depth (nvme0n3) 00:15:52.962 Could not set queue depth (nvme0n4) 00:15:53.221 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:53.221 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:53.221 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:53.221 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:53.221 fio-3.35 
00:15:53.221 Starting 4 threads 00:15:54.596 00:15:54.596 job0: (groupid=0, jobs=1): err= 0: pid=1120896: Sat Jul 13 06:11:00 2024 00:15:54.596 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:15:54.596 slat (nsec): min=4381, max=64076, avg=16552.76, stdev=10473.34 00:15:54.596 clat (usec): min=234, max=41980, avg=650.15, stdev=3494.17 00:15:54.596 lat (usec): min=239, max=41993, avg=666.70, stdev=3494.03 00:15:54.596 clat percentiles (usec): 00:15:54.596 | 1.00th=[ 265], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 306], 00:15:54.596 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 338], 60.00th=[ 355], 00:15:54.596 | 70.00th=[ 363], 80.00th=[ 375], 90.00th=[ 388], 95.00th=[ 412], 00:15:54.596 | 99.00th=[ 519], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:15:54.596 | 99.99th=[42206] 00:15:54.596 write: IOPS=1224, BW=4899KiB/s (5017kB/s)(4904KiB/1001msec); 0 zone resets 00:15:54.596 slat (nsec): min=6486, max=47445, avg=14190.95, stdev=5469.67 00:15:54.596 clat (usec): min=165, max=461, avg=236.65, stdev=46.25 00:15:54.596 lat (usec): min=174, max=477, avg=250.84, stdev=46.42 00:15:54.596 clat percentiles (usec): 00:15:54.596 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 190], 20.00th=[ 206], 00:15:54.596 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 233], 00:15:54.596 | 70.00th=[ 239], 80.00th=[ 249], 90.00th=[ 306], 95.00th=[ 347], 00:15:54.596 | 99.00th=[ 396], 99.50th=[ 408], 99.90th=[ 453], 99.95th=[ 461], 00:15:54.596 | 99.99th=[ 461] 00:15:54.596 bw ( KiB/s): min= 4096, max= 4096, per=21.74%, avg=4096.00, stdev= 0.00, samples=1 00:15:54.596 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:54.596 lat (usec) : 250=43.64%, 500=55.78%, 750=0.18% 00:15:54.596 lat (msec) : 2=0.04%, 50=0.36% 00:15:54.596 cpu : usr=1.80%, sys=4.00%, ctx=2250, majf=0, minf=1 00:15:54.596 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:54.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.596 issued rwts: total=1024,1226,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:54.596 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:54.596 job1: (groupid=0, jobs=1): err= 0: pid=1120897: Sat Jul 13 06:11:00 2024 00:15:54.596 read: IOPS=141, BW=564KiB/s (578kB/s)(576KiB/1021msec) 00:15:54.596 slat (nsec): min=5660, max=38775, avg=17880.63, stdev=7498.56 00:15:54.596 clat (usec): min=284, max=41995, avg=6042.99, stdev=14163.95 00:15:54.596 lat (usec): min=296, max=42003, avg=6060.87, stdev=14164.14 00:15:54.596 clat percentiles (usec): 00:15:54.596 | 1.00th=[ 285], 5.00th=[ 297], 10.00th=[ 318], 20.00th=[ 334], 00:15:54.596 | 30.00th=[ 343], 40.00th=[ 351], 50.00th=[ 367], 60.00th=[ 388], 00:15:54.596 | 70.00th=[ 429], 80.00th=[ 482], 90.00th=[41157], 95.00th=[41157], 00:15:54.596 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:54.596 | 99.99th=[42206] 00:15:54.596 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:15:54.596 slat (usec): min=6, max=19043, avg=48.44, stdev=841.12 00:15:54.596 clat (usec): min=176, max=876, avg=235.63, stdev=66.06 00:15:54.596 lat (usec): min=186, max=19262, avg=284.07, stdev=843.00 00:15:54.596 clat percentiles (usec): 00:15:54.596 | 1.00th=[ 184], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 210], 00:15:54.596 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 233], 00:15:54.596 | 70.00th=[ 239], 80.00th=[ 243], 90.00th=[ 251], 
95.00th=[ 281], 00:15:54.596 | 99.00th=[ 635], 99.50th=[ 791], 99.90th=[ 881], 99.95th=[ 881], 00:15:54.596 | 99.99th=[ 881] 00:15:54.596 bw ( KiB/s): min= 4096, max= 4096, per=21.74%, avg=4096.00, stdev= 0.00, samples=1 00:15:54.596 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:54.596 lat (usec) : 250=69.66%, 500=25.00%, 750=1.52%, 1000=0.76% 00:15:54.596 lat (msec) : 50=3.05% 00:15:54.596 cpu : usr=0.98%, sys=0.59%, ctx=658, majf=0, minf=2 00:15:54.596 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:54.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.596 issued rwts: total=144,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:54.596 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:54.596 job2: (groupid=0, jobs=1): err= 0: pid=1120898: Sat Jul 13 06:11:00 2024 00:15:54.596 read: IOPS=1375, BW=5502KiB/s (5635kB/s)(5508KiB/1001msec) 00:15:54.596 slat (nsec): min=4378, max=61101, avg=17416.61, stdev=10448.73 00:15:54.596 clat (usec): min=252, max=723, avg=377.69, stdev=78.64 00:15:54.596 lat (usec): min=257, max=737, avg=395.11, stdev=82.09 00:15:54.596 clat percentiles (usec): 00:15:54.596 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 285], 20.00th=[ 297], 00:15:54.596 | 30.00th=[ 334], 40.00th=[ 351], 50.00th=[ 375], 60.00th=[ 383], 00:15:54.596 | 70.00th=[ 404], 80.00th=[ 437], 90.00th=[ 490], 95.00th=[ 523], 00:15:54.596 | 99.00th=[ 619], 99.50th=[ 660], 99.90th=[ 709], 99.95th=[ 725], 00:15:54.596 | 99.99th=[ 725] 00:15:54.596 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:15:54.596 slat (nsec): min=5952, max=83116, avg=15964.76, stdev=10990.02 00:15:54.596 clat (usec): min=181, max=2821, avg=272.00, stdev=108.54 00:15:54.596 lat (usec): min=187, max=2828, avg=287.96, stdev=111.13 00:15:54.596 clat percentiles (usec): 00:15:54.596 | 1.00th=[ 188], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 217], 00:15:54.596 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 243], 60.00th=[ 265], 00:15:54.596 | 70.00th=[ 293], 80.00th=[ 330], 90.00th=[ 367], 95.00th=[ 392], 00:15:54.596 | 99.00th=[ 461], 99.50th=[ 510], 99.90th=[ 2376], 99.95th=[ 2835], 00:15:54.596 | 99.99th=[ 2835] 00:15:54.596 bw ( KiB/s): min= 8192, max= 8192, per=43.47%, avg=8192.00, stdev= 0.00, samples=1 00:15:54.596 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:54.596 lat (usec) : 250=29.25%, 500=66.46%, 750=4.19% 00:15:54.596 lat (msec) : 2=0.03%, 4=0.07% 00:15:54.596 cpu : usr=2.80%, sys=4.70%, ctx=2913, majf=0, minf=1 00:15:54.596 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:54.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.596 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.596 issued rwts: total=1377,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:54.596 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:54.596 job3: (groupid=0, jobs=1): err= 0: pid=1120899: Sat Jul 13 06:11:00 2024 00:15:54.596 read: IOPS=1219, BW=4878KiB/s (4996kB/s)(4976KiB/1020msec) 00:15:54.596 slat (nsec): min=5115, max=35987, avg=11081.00, stdev=5423.17 00:15:54.596 clat (usec): min=253, max=41150, avg=475.14, stdev=2305.25 00:15:54.596 lat (usec): min=261, max=41160, avg=486.22, stdev=2306.00 00:15:54.596 clat percentiles (usec): 00:15:54.596 | 1.00th=[ 262], 5.00th=[ 269], 10.00th=[ 281], 20.00th=[ 293], 
00:15:54.596 | 30.00th=[ 306], 40.00th=[ 318], 50.00th=[ 338], 60.00th=[ 355], 00:15:54.596 | 70.00th=[ 375], 80.00th=[ 392], 90.00th=[ 416], 95.00th=[ 437], 00:15:54.596 | 99.00th=[ 519], 99.50th=[ 709], 99.90th=[41157], 99.95th=[41157], 00:15:54.596 | 99.99th=[41157] 00:15:54.596 write: IOPS=1505, BW=6024KiB/s (6168kB/s)(6144KiB/1020msec); 0 zone resets 00:15:54.596 slat (usec): min=6, max=8842, avg=19.98, stdev=225.43 00:15:54.596 clat (usec): min=171, max=2664, avg=243.56, stdev=95.05 00:15:54.596 lat (usec): min=180, max=9065, avg=263.54, stdev=245.32 00:15:54.596 clat percentiles (usec): 00:15:54.596 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 200], 00:15:54.596 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 225], 60.00th=[ 233], 00:15:54.596 | 70.00th=[ 245], 80.00th=[ 265], 90.00th=[ 318], 95.00th=[ 375], 00:15:54.596 | 99.00th=[ 437], 99.50th=[ 490], 99.90th=[ 1418], 99.95th=[ 2671], 00:15:54.596 | 99.99th=[ 2671] 00:15:54.596 bw ( KiB/s): min= 4096, max= 8192, per=32.60%, avg=6144.00, stdev=2896.31, samples=2 00:15:54.596 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:15:54.596 lat (usec) : 250=41.08%, 500=58.09%, 750=0.43%, 1000=0.11% 00:15:54.596 lat (msec) : 2=0.11%, 4=0.04%, 50=0.14% 00:15:54.596 cpu : usr=2.45%, sys=4.02%, ctx=2784, majf=0, minf=1 00:15:54.596 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:54.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.597 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:54.597 issued rwts: total=1244,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:54.597 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:54.597 00:15:54.597 Run status group 0 (all jobs): 00:15:54.597 READ: bw=14.5MiB/s (15.2MB/s), 564KiB/s-5502KiB/s (578kB/s-5635kB/s), io=14.8MiB (15.5MB), run=1001-1021msec 00:15:54.597 WRITE: bw=18.4MiB/s (19.3MB/s), 2006KiB/s-6138KiB/s (2054kB/s-6285kB/s), io=18.8MiB (19.7MB), run=1001-1021msec 00:15:54.597 00:15:54.597 Disk stats (read/write): 00:15:54.597 nvme0n1: ios=783/1024, merge=0/0, ticks=1205/243, in_queue=1448, util=99.00% 00:15:54.597 nvme0n2: ios=138/512, merge=0/0, ticks=1688/123, in_queue=1811, util=98.06% 00:15:54.597 nvme0n3: ios=1024/1508, merge=0/0, ticks=377/392, in_queue=769, util=88.98% 00:15:54.597 nvme0n4: ios=1185/1536, merge=0/0, ticks=1027/351, in_queue=1378, util=97.67% 00:15:54.597 06:11:00 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:15:54.597 [global] 00:15:54.597 thread=1 00:15:54.597 invalidate=1 00:15:54.597 rw=randwrite 00:15:54.597 time_based=1 00:15:54.597 runtime=1 00:15:54.597 ioengine=libaio 00:15:54.597 direct=1 00:15:54.597 bs=4096 00:15:54.597 iodepth=1 00:15:54.597 norandommap=0 00:15:54.597 numjobs=1 00:15:54.597 00:15:54.597 verify_dump=1 00:15:54.597 verify_backlog=512 00:15:54.597 verify_state_save=0 00:15:54.597 do_verify=1 00:15:54.597 verify=crc32c-intel 00:15:54.597 [job0] 00:15:54.597 filename=/dev/nvme0n1 00:15:54.597 [job1] 00:15:54.597 filename=/dev/nvme0n2 00:15:54.597 [job2] 00:15:54.597 filename=/dev/nvme0n3 00:15:54.597 [job3] 00:15:54.597 filename=/dev/nvme0n4 00:15:54.597 Could not set queue depth (nvme0n1) 00:15:54.597 Could not set queue depth (nvme0n2) 00:15:54.597 Could not set queue depth (nvme0n3) 00:15:54.597 Could not set queue depth (nvme0n4) 00:15:54.597 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=1 00:15:54.597 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:54.597 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:54.597 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:54.597 fio-3.35 00:15:54.597 Starting 4 threads 00:15:55.970 00:15:55.970 job0: (groupid=0, jobs=1): err= 0: pid=1121257: Sat Jul 13 06:11:02 2024 00:15:55.970 read: IOPS=21, BW=85.1KiB/s (87.1kB/s)(88.0KiB/1034msec) 00:15:55.970 slat (nsec): min=8629, max=34253, avg=16222.73, stdev=5389.80 00:15:55.970 clat (usec): min=40894, max=41052, avg=40983.04, stdev=35.60 00:15:55.970 lat (usec): min=40912, max=41061, avg=40999.26, stdev=33.85 00:15:55.970 clat percentiles (usec): 00:15:55.970 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:15:55.970 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:55.970 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:55.970 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:55.970 | 99.99th=[41157] 00:15:55.970 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:15:55.970 slat (nsec): min=9012, max=41899, avg=17225.77, stdev=7293.61 00:15:55.970 clat (usec): min=183, max=407, avg=234.49, stdev=26.96 00:15:55.970 lat (usec): min=193, max=417, avg=251.72, stdev=29.94 00:15:55.970 clat percentiles (usec): 00:15:55.970 | 1.00th=[ 188], 5.00th=[ 198], 10.00th=[ 206], 20.00th=[ 215], 00:15:55.970 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 237], 00:15:55.970 | 70.00th=[ 243], 80.00th=[ 251], 90.00th=[ 265], 95.00th=[ 285], 00:15:55.970 | 99.00th=[ 322], 99.50th=[ 338], 99.90th=[ 408], 99.95th=[ 408], 00:15:55.970 | 99.99th=[ 408] 00:15:55.970 bw ( KiB/s): min= 4096, max= 4096, per=29.54%, avg=4096.00, stdev= 0.00, samples=1 00:15:55.970 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:55.970 lat (usec) : 250=76.22%, 500=19.66% 00:15:55.970 lat (msec) : 50=4.12% 00:15:55.970 cpu : usr=0.29%, sys=1.36%, ctx=536, majf=0, minf=2 00:15:55.970 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:55.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:55.970 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:55.970 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:55.970 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:55.970 job1: (groupid=0, jobs=1): err= 0: pid=1121258: Sat Jul 13 06:11:02 2024 00:15:55.970 read: IOPS=82, BW=331KiB/s (339kB/s)(332KiB/1002msec) 00:15:55.970 slat (nsec): min=5052, max=24483, avg=9787.99, stdev=3646.47 00:15:55.970 clat (usec): min=293, max=41065, avg=10625.27, stdev=17764.02 00:15:55.970 lat (usec): min=299, max=41078, avg=10635.06, stdev=17766.04 00:15:55.970 clat percentiles (usec): 00:15:55.970 | 1.00th=[ 293], 5.00th=[ 297], 10.00th=[ 302], 20.00th=[ 306], 00:15:55.970 | 30.00th=[ 314], 40.00th=[ 330], 50.00th=[ 375], 60.00th=[ 392], 00:15:55.970 | 70.00th=[ 465], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:55.970 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:55.970 | 99.99th=[41157] 00:15:55.970 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:15:55.970 slat (nsec): min=7274, max=38437, 
avg=14378.61, stdev=6198.79 00:15:55.970 clat (usec): min=177, max=385, avg=213.04, stdev=25.26 00:15:55.970 lat (usec): min=185, max=393, avg=227.42, stdev=25.74 00:15:55.970 clat percentiles (usec): 00:15:55.970 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 198], 00:15:55.970 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 208], 60.00th=[ 212], 00:15:55.970 | 70.00th=[ 217], 80.00th=[ 221], 90.00th=[ 231], 95.00th=[ 260], 00:15:55.970 | 99.00th=[ 322], 99.50th=[ 371], 99.90th=[ 388], 99.95th=[ 388], 00:15:55.970 | 99.99th=[ 388] 00:15:55.970 bw ( KiB/s): min= 4096, max= 4096, per=29.54%, avg=4096.00, stdev= 0.00, samples=1 00:15:55.970 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:55.970 lat (usec) : 250=81.51%, 500=14.79%, 750=0.17% 00:15:55.970 lat (msec) : 50=3.53% 00:15:55.970 cpu : usr=0.40%, sys=0.80%, ctx=596, majf=0, minf=1 00:15:55.970 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:55.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:55.970 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:55.970 issued rwts: total=83,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:55.970 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:55.970 job2: (groupid=0, jobs=1): err= 0: pid=1121259: Sat Jul 13 06:11:02 2024 00:15:55.970 read: IOPS=1255, BW=5023KiB/s (5144kB/s)(5028KiB/1001msec) 00:15:55.970 slat (nsec): min=5858, max=61634, avg=11890.00, stdev=5503.93 00:15:55.970 clat (usec): min=243, max=41881, avg=496.01, stdev=2812.06 00:15:55.970 lat (usec): min=249, max=41889, avg=507.90, stdev=2812.18 00:15:55.970 clat percentiles (usec): 00:15:55.970 | 1.00th=[ 253], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 277], 00:15:55.970 | 30.00th=[ 285], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 306], 00:15:55.970 | 70.00th=[ 314], 80.00th=[ 318], 90.00th=[ 330], 95.00th=[ 347], 00:15:55.970 | 99.00th=[ 490], 99.50th=[ 611], 99.90th=[41157], 99.95th=[41681], 00:15:55.970 | 99.99th=[41681] 00:15:55.970 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:15:55.970 slat (nsec): min=6393, max=54991, avg=14899.77, stdev=7401.82 00:15:55.970 clat (usec): min=167, max=480, avg=213.95, stdev=36.60 00:15:55.970 lat (usec): min=174, max=491, avg=228.85, stdev=41.21 00:15:55.970 clat percentiles (usec): 00:15:55.970 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 180], 20.00th=[ 186], 00:15:55.970 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 212], 00:15:55.970 | 70.00th=[ 225], 80.00th=[ 247], 90.00th=[ 269], 95.00th=[ 281], 00:15:55.970 | 99.00th=[ 314], 99.50th=[ 359], 99.90th=[ 433], 99.95th=[ 482], 00:15:55.971 | 99.99th=[ 482] 00:15:55.971 bw ( KiB/s): min= 4096, max= 4096, per=29.54%, avg=4096.00, stdev= 0.00, samples=1 00:15:55.971 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:55.971 lat (usec) : 250=44.75%, 500=54.85%, 750=0.18% 00:15:55.971 lat (msec) : 50=0.21% 00:15:55.971 cpu : usr=2.40%, sys=5.20%, ctx=2795, majf=0, minf=1 00:15:55.971 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:55.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:55.971 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:55.971 issued rwts: total=1257,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:55.971 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:55.971 job3: (groupid=0, jobs=1): err= 0: pid=1121260: Sat Jul 13 06:11:02 2024 
00:15:55.971 read: IOPS=723, BW=2894KiB/s (2963kB/s)(2972KiB/1027msec) 00:15:55.971 slat (nsec): min=4195, max=34475, avg=8801.74, stdev=4187.55 00:15:55.971 clat (usec): min=238, max=41885, avg=1079.43, stdev=5543.39 00:15:55.971 lat (usec): min=243, max=41891, avg=1088.23, stdev=5544.32 00:15:55.971 clat percentiles (usec): 00:15:55.971 | 1.00th=[ 243], 5.00th=[ 247], 10.00th=[ 251], 20.00th=[ 255], 00:15:55.971 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 293], 60.00th=[ 314], 00:15:55.971 | 70.00th=[ 347], 80.00th=[ 379], 90.00th=[ 388], 95.00th=[ 482], 00:15:55.971 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:15:55.971 | 99.99th=[41681] 00:15:55.971 write: IOPS=997, BW=3988KiB/s (4084kB/s)(4096KiB/1027msec); 0 zone resets 00:15:55.971 slat (nsec): min=5267, max=78062, avg=10276.87, stdev=6171.93 00:15:55.971 clat (usec): min=158, max=798, avg=197.86, stdev=44.34 00:15:55.971 lat (usec): min=164, max=814, avg=208.14, stdev=46.46 00:15:55.971 clat percentiles (usec): 00:15:55.971 | 1.00th=[ 163], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 174], 00:15:55.971 | 30.00th=[ 178], 40.00th=[ 186], 50.00th=[ 194], 60.00th=[ 200], 00:15:55.971 | 70.00th=[ 204], 80.00th=[ 212], 90.00th=[ 223], 95.00th=[ 247], 00:15:55.971 | 99.00th=[ 355], 99.50th=[ 404], 99.90th=[ 758], 99.95th=[ 799], 00:15:55.971 | 99.99th=[ 799] 00:15:55.971 bw ( KiB/s): min= 8192, max= 8192, per=59.09%, avg=8192.00, stdev= 0.00, samples=1 00:15:55.971 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:15:55.971 lat (usec) : 250=59.08%, 500=39.22%, 750=0.79%, 1000=0.11% 00:15:55.971 lat (msec) : 50=0.79% 00:15:55.971 cpu : usr=0.39%, sys=2.14%, ctx=1767, majf=0, minf=1 00:15:55.971 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:55.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:55.971 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:55.971 issued rwts: total=743,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:55.971 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:55.971 00:15:55.971 Run status group 0 (all jobs): 00:15:55.971 READ: bw=8143KiB/s (8339kB/s), 85.1KiB/s-5023KiB/s (87.1kB/s-5144kB/s), io=8420KiB (8622kB), run=1001-1034msec 00:15:55.971 WRITE: bw=13.5MiB/s (14.2MB/s), 1981KiB/s-6138KiB/s (2028kB/s-6285kB/s), io=14.0MiB (14.7MB), run=1001-1034msec 00:15:55.971 00:15:55.971 Disk stats (read/write): 00:15:55.971 nvme0n1: ios=43/512, merge=0/0, ticks=1653/119, in_queue=1772, util=98.20% 00:15:55.971 nvme0n2: ios=92/512, merge=0/0, ticks=728/107, in_queue=835, util=86.69% 00:15:55.971 nvme0n3: ios=1070/1194, merge=0/0, ticks=789/255, in_queue=1044, util=100.00% 00:15:55.971 nvme0n4: ios=763/1024, merge=0/0, ticks=786/192, in_queue=978, util=90.84% 00:15:55.971 06:11:02 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:15:55.971 [global] 00:15:55.971 thread=1 00:15:55.971 invalidate=1 00:15:55.971 rw=write 00:15:55.971 time_based=1 00:15:55.971 runtime=1 00:15:55.971 ioengine=libaio 00:15:55.971 direct=1 00:15:55.971 bs=4096 00:15:55.971 iodepth=128 00:15:55.971 norandommap=0 00:15:55.971 numjobs=1 00:15:55.971 00:15:55.971 verify_dump=1 00:15:55.971 verify_backlog=512 00:15:55.971 verify_state_save=0 00:15:55.971 do_verify=1 00:15:55.971 verify=crc32c-intel 00:15:55.971 [job0] 00:15:55.971 filename=/dev/nvme0n1 00:15:55.971 [job1] 00:15:55.971 filename=/dev/nvme0n2 00:15:55.971 
[job2] 00:15:55.971 filename=/dev/nvme0n3 00:15:55.971 [job3] 00:15:55.971 filename=/dev/nvme0n4 00:15:55.971 Could not set queue depth (nvme0n1) 00:15:55.971 Could not set queue depth (nvme0n2) 00:15:55.971 Could not set queue depth (nvme0n3) 00:15:55.971 Could not set queue depth (nvme0n4) 00:15:55.971 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:55.971 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:55.971 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:55.971 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:55.971 fio-3.35 00:15:55.971 Starting 4 threads 00:15:57.342 00:15:57.342 job0: (groupid=0, jobs=1): err= 0: pid=1121490: Sat Jul 13 06:11:03 2024 00:15:57.342 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:15:57.342 slat (usec): min=2, max=9950, avg=154.00, stdev=854.12 00:15:57.342 clat (usec): min=4911, max=42521, avg=19679.23, stdev=6852.86 00:15:57.342 lat (usec): min=4926, max=42550, avg=19833.22, stdev=6883.62 00:15:57.342 clat percentiles (usec): 00:15:57.342 | 1.00th=[ 6194], 5.00th=[ 7570], 10.00th=[ 7963], 20.00th=[14091], 00:15:57.342 | 30.00th=[17433], 40.00th=[18482], 50.00th=[20317], 60.00th=[22414], 00:15:57.342 | 70.00th=[23462], 80.00th=[24773], 90.00th=[27132], 95.00th=[30278], 00:15:57.342 | 99.00th=[37487], 99.50th=[37487], 99.90th=[42730], 99.95th=[42730], 00:15:57.342 | 99.99th=[42730] 00:15:57.342 write: IOPS=2924, BW=11.4MiB/s (12.0MB/s)(11.5MiB/1004msec); 0 zone resets 00:15:57.342 slat (usec): min=3, max=20856, avg=193.61, stdev=1232.73 00:15:57.342 clat (usec): min=1611, max=95214, avg=25328.07, stdev=16454.21 00:15:57.342 lat (usec): min=1633, max=95222, avg=25521.67, stdev=16536.52 00:15:57.342 clat percentiles (usec): 00:15:57.342 | 1.00th=[ 4752], 5.00th=[ 6063], 10.00th=[ 8029], 20.00th=[11076], 00:15:57.342 | 30.00th=[14484], 40.00th=[18482], 50.00th=[24511], 60.00th=[26346], 00:15:57.342 | 70.00th=[27395], 80.00th=[34341], 90.00th=[49546], 95.00th=[57934], 00:15:57.342 | 99.00th=[85459], 99.50th=[91751], 99.90th=[94897], 99.95th=[94897], 00:15:57.342 | 99.99th=[94897] 00:15:57.342 bw ( KiB/s): min=10176, max=12288, per=17.03%, avg=11232.00, stdev=1493.41, samples=2 00:15:57.342 iops : min= 2544, max= 3072, avg=2808.00, stdev=373.35, samples=2 00:15:57.342 lat (msec) : 2=0.15%, 4=0.15%, 10=15.05%, 20=31.13%, 50=48.38% 00:15:57.342 lat (msec) : 100=5.15% 00:15:57.342 cpu : usr=2.49%, sys=3.99%, ctx=270, majf=0, minf=13 00:15:57.342 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:15:57.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.342 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:57.342 issued rwts: total=2560,2936,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.342 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:57.342 job1: (groupid=0, jobs=1): err= 0: pid=1121491: Sat Jul 13 06:11:03 2024 00:15:57.342 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:15:57.342 slat (usec): min=2, max=6458, avg=101.82, stdev=551.50 00:15:57.342 clat (usec): min=3832, max=34384, avg=13712.43, stdev=4681.31 00:15:57.342 lat (usec): min=3839, max=34425, avg=13814.25, stdev=4719.11 00:15:57.342 clat percentiles (usec): 00:15:57.342 | 1.00th=[ 5669], 5.00th=[10028], 
10.00th=[10552], 20.00th=[11076], 00:15:57.342 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11600], 60.00th=[12256], 00:15:57.342 | 70.00th=[13304], 80.00th=[17171], 90.00th=[21103], 95.00th=[25297], 00:15:57.342 | 99.00th=[28443], 99.50th=[28705], 99.90th=[31065], 99.95th=[32113], 00:15:57.342 | 99.99th=[34341] 00:15:57.342 write: IOPS=4986, BW=19.5MiB/s (20.4MB/s)(19.5MiB/1002msec); 0 zone resets 00:15:57.342 slat (usec): min=4, max=9478, avg=96.48, stdev=512.18 00:15:57.342 clat (usec): min=481, max=23867, avg=12735.83, stdev=4033.07 00:15:57.342 lat (usec): min=3322, max=23875, avg=12832.31, stdev=4057.27 00:15:57.342 clat percentiles (usec): 00:15:57.342 | 1.00th=[ 4424], 5.00th=[ 7242], 10.00th=[ 8455], 20.00th=[10683], 00:15:57.342 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11731], 60.00th=[11994], 00:15:57.343 | 70.00th=[12649], 80.00th=[15139], 90.00th=[19530], 95.00th=[22414], 00:15:57.343 | 99.00th=[23725], 99.50th=[23725], 99.90th=[23725], 99.95th=[23725], 00:15:57.343 | 99.99th=[23987] 00:15:57.343 bw ( KiB/s): min=18472, max=20480, per=29.53%, avg=19476.00, stdev=1419.87, samples=2 00:15:57.343 iops : min= 4618, max= 5120, avg=4869.00, stdev=354.97, samples=2 00:15:57.343 lat (usec) : 500=0.01% 00:15:57.343 lat (msec) : 4=0.30%, 10=11.33%, 20=78.22%, 50=10.14% 00:15:57.343 cpu : usr=6.59%, sys=9.19%, ctx=453, majf=0, minf=9 00:15:57.343 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:15:57.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:57.343 issued rwts: total=4608,4996,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.343 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:57.343 job2: (groupid=0, jobs=1): err= 0: pid=1121494: Sat Jul 13 06:11:03 2024 00:15:57.343 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:15:57.343 slat (usec): min=3, max=12573, avg=98.04, stdev=738.33 00:15:57.343 clat (usec): min=5086, max=26326, avg=13204.50, stdev=3166.62 00:15:57.343 lat (usec): min=5094, max=26342, avg=13302.54, stdev=3207.16 00:15:57.343 clat percentiles (usec): 00:15:57.343 | 1.00th=[ 7767], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10683], 00:15:57.343 | 30.00th=[11207], 40.00th=[11994], 50.00th=[12780], 60.00th=[13698], 00:15:57.343 | 70.00th=[13960], 80.00th=[15270], 90.00th=[17957], 95.00th=[20055], 00:15:57.343 | 99.00th=[22152], 99.50th=[22414], 99.90th=[23987], 99.95th=[24773], 00:15:57.343 | 99.99th=[26346] 00:15:57.343 write: IOPS=4726, BW=18.5MiB/s (19.4MB/s)(18.6MiB/1005msec); 0 zone resets 00:15:57.343 slat (usec): min=4, max=63624, avg=105.80, stdev=1152.58 00:15:57.343 clat (usec): min=1301, max=74511, avg=11646.72, stdev=4057.52 00:15:57.343 lat (usec): min=3029, max=98238, avg=11752.52, stdev=4265.93 00:15:57.343 clat percentiles (usec): 00:15:57.343 | 1.00th=[ 4948], 5.00th=[ 6652], 10.00th=[ 7046], 20.00th=[ 7767], 00:15:57.343 | 30.00th=[ 9241], 40.00th=[11076], 50.00th=[11469], 60.00th=[12125], 00:15:57.343 | 70.00th=[12518], 80.00th=[13435], 90.00th=[16909], 95.00th=[20055], 00:15:57.343 | 99.00th=[24511], 99.50th=[24773], 99.90th=[24773], 99.95th=[24773], 00:15:57.343 | 99.99th=[74974] 00:15:57.343 bw ( KiB/s): min=16384, max=20856, per=28.23%, avg=18620.00, stdev=3162.18, samples=2 00:15:57.343 iops : min= 4096, max= 5214, avg=4655.00, stdev=790.55, samples=2 00:15:57.343 lat (msec) : 2=0.01%, 4=0.17%, 10=21.17%, 20=73.20%, 50=5.44% 00:15:57.343 lat (msec) : 100=0.01% 00:15:57.343 cpu 
: usr=4.38%, sys=11.06%, ctx=347, majf=0, minf=13 00:15:57.343 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:15:57.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:57.343 issued rwts: total=4608,4750,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.343 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:57.343 job3: (groupid=0, jobs=1): err= 0: pid=1121495: Sat Jul 13 06:11:03 2024 00:15:57.343 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:15:57.343 slat (usec): min=3, max=18866, avg=128.02, stdev=718.12 00:15:57.343 clat (usec): min=8735, max=37882, avg=15333.83, stdev=4224.72 00:15:57.343 lat (usec): min=8746, max=37888, avg=15461.84, stdev=4290.14 00:15:57.343 clat percentiles (usec): 00:15:57.343 | 1.00th=[ 9634], 5.00th=[10421], 10.00th=[11863], 20.00th=[12125], 00:15:57.343 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12911], 60.00th=[14615], 00:15:57.343 | 70.00th=[19006], 80.00th=[20317], 90.00th=[20841], 95.00th=[22676], 00:15:57.343 | 99.00th=[25297], 99.50th=[28181], 99.90th=[30278], 99.95th=[38011], 00:15:57.343 | 99.99th=[38011] 00:15:57.343 write: IOPS=3874, BW=15.1MiB/s (15.9MB/s)(15.2MiB/1004msec); 0 zone resets 00:15:57.343 slat (usec): min=4, max=18491, avg=130.53, stdev=798.67 00:15:57.343 clat (usec): min=1460, max=64457, avg=18447.63, stdev=9407.15 00:15:57.343 lat (usec): min=6366, max=64475, avg=18578.15, stdev=9481.18 00:15:57.343 clat percentiles (usec): 00:15:57.343 | 1.00th=[ 6915], 5.00th=[11600], 10.00th=[12125], 20.00th=[12518], 00:15:57.343 | 30.00th=[12649], 40.00th=[12911], 50.00th=[14615], 60.00th=[16319], 00:15:57.343 | 70.00th=[17957], 80.00th=[23725], 90.00th=[32900], 95.00th=[39584], 00:15:57.343 | 99.00th=[55313], 99.50th=[55313], 99.90th=[64226], 99.95th=[64226], 00:15:57.343 | 99.99th=[64226] 00:15:57.343 bw ( KiB/s): min=13712, max=16384, per=22.81%, avg=15048.00, stdev=1889.39, samples=2 00:15:57.343 iops : min= 3428, max= 4096, avg=3762.00, stdev=472.35, samples=2 00:15:57.343 lat (msec) : 2=0.01%, 10=3.10%, 20=72.50%, 50=23.72%, 100=0.66% 00:15:57.343 cpu : usr=4.69%, sys=6.88%, ctx=404, majf=0, minf=15 00:15:57.343 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:57.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:57.343 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:57.343 issued rwts: total=3584,3890,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:57.343 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:57.343 00:15:57.343 Run status group 0 (all jobs): 00:15:57.343 READ: bw=59.7MiB/s (62.6MB/s), 9.96MiB/s-18.0MiB/s (10.4MB/s-18.8MB/s), io=60.0MiB (62.9MB), run=1002-1005msec 00:15:57.343 WRITE: bw=64.4MiB/s (67.5MB/s), 11.4MiB/s-19.5MiB/s (12.0MB/s-20.4MB/s), io=64.7MiB (67.9MB), run=1002-1005msec 00:15:57.343 00:15:57.343 Disk stats (read/write): 00:15:57.343 nvme0n1: ios=2228/2560, merge=0/0, ticks=14755/27667, in_queue=42422, util=91.78% 00:15:57.343 nvme0n2: ios=3991/4096, merge=0/0, ticks=23091/23530, in_queue=46621, util=97.97% 00:15:57.343 nvme0n3: ios=3644/3983, merge=0/0, ticks=46513/45881, in_queue=92394, util=95.72% 00:15:57.343 nvme0n4: ios=3129/3277, merge=0/0, ticks=14952/19290, in_queue=34242, util=94.95% 00:15:57.343 06:11:03 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite 
-r 1 -v 00:15:57.343 [global] 00:15:57.343 thread=1 00:15:57.343 invalidate=1 00:15:57.343 rw=randwrite 00:15:57.343 time_based=1 00:15:57.343 runtime=1 00:15:57.343 ioengine=libaio 00:15:57.343 direct=1 00:15:57.343 bs=4096 00:15:57.343 iodepth=128 00:15:57.343 norandommap=0 00:15:57.343 numjobs=1 00:15:57.343 00:15:57.343 verify_dump=1 00:15:57.343 verify_backlog=512 00:15:57.343 verify_state_save=0 00:15:57.343 do_verify=1 00:15:57.343 verify=crc32c-intel 00:15:57.343 [job0] 00:15:57.343 filename=/dev/nvme0n1 00:15:57.343 [job1] 00:15:57.343 filename=/dev/nvme0n2 00:15:57.343 [job2] 00:15:57.343 filename=/dev/nvme0n3 00:15:57.343 [job3] 00:15:57.343 filename=/dev/nvme0n4 00:15:57.343 Could not set queue depth (nvme0n1) 00:15:57.343 Could not set queue depth (nvme0n2) 00:15:57.343 Could not set queue depth (nvme0n3) 00:15:57.343 Could not set queue depth (nvme0n4) 00:15:57.601 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:57.601 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:57.601 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:57.601 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:57.601 fio-3.35 00:15:57.601 Starting 4 threads 00:15:58.977 00:15:58.977 job0: (groupid=0, jobs=1): err= 0: pid=1121732: Sat Jul 13 06:11:05 2024 00:15:58.977 read: IOPS=4879, BW=19.1MiB/s (20.0MB/s)(19.9MiB/1046msec) 00:15:58.977 slat (usec): min=2, max=28520, avg=105.36, stdev=931.22 00:15:58.977 clat (usec): min=3134, max=97560, avg=14765.36, stdev=14780.37 00:15:58.977 lat (msec): min=3, max=124, avg=14.87, stdev=14.88 00:15:58.977 clat percentiles (usec): 00:15:58.977 | 1.00th=[ 5538], 5.00th=[ 7111], 10.00th=[ 7701], 20.00th=[ 8160], 00:15:58.977 | 30.00th=[ 8979], 40.00th=[ 9896], 50.00th=[10683], 60.00th=[11207], 00:15:58.977 | 70.00th=[13173], 80.00th=[15664], 90.00th=[19530], 95.00th=[49021], 00:15:58.977 | 99.00th=[77071], 99.50th=[95945], 99.90th=[95945], 99.95th=[95945], 00:15:58.977 | 99.99th=[98042] 00:15:58.977 write: IOPS=4894, BW=19.1MiB/s (20.0MB/s)(20.0MiB/1046msec); 0 zone resets 00:15:58.977 slat (usec): min=4, max=17787, avg=82.27, stdev=586.09 00:15:58.977 clat (usec): min=2353, max=37280, avg=11186.05, stdev=4555.36 00:15:58.977 lat (usec): min=2361, max=37301, avg=11268.32, stdev=4584.41 00:15:58.977 clat percentiles (usec): 00:15:58.977 | 1.00th=[ 3556], 5.00th=[ 4752], 10.00th=[ 5604], 20.00th=[ 7570], 00:15:58.977 | 30.00th=[ 9110], 40.00th=[10290], 50.00th=[10945], 60.00th=[11731], 00:15:58.977 | 70.00th=[12780], 80.00th=[13435], 90.00th=[16450], 95.00th=[21103], 00:15:58.977 | 99.00th=[26346], 99.50th=[26608], 99.90th=[27132], 99.95th=[27132], 00:15:58.977 | 99.99th=[37487] 00:15:58.977 bw ( KiB/s): min=20480, max=20480, per=35.92%, avg=20480.00, stdev= 0.00, samples=2 00:15:58.977 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:15:58.977 lat (msec) : 4=0.90%, 10=39.68%, 20=51.52%, 50=5.43%, 100=2.47% 00:15:58.977 cpu : usr=4.21%, sys=8.04%, ctx=448, majf=0, minf=11 00:15:58.977 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:15:58.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.977 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:58.977 issued rwts: total=5104,5120,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:15:58.977 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:58.977 job1: (groupid=0, jobs=1): err= 0: pid=1121733: Sat Jul 13 06:11:05 2024 00:15:58.977 read: IOPS=1437, BW=5750KiB/s (5888kB/s)(6020KiB/1047msec) 00:15:58.977 slat (usec): min=3, max=42071, avg=353.87, stdev=2755.25 00:15:58.977 clat (msec): min=8, max=134, avg=46.14, stdev=42.89 00:15:58.977 lat (msec): min=8, max=134, avg=46.49, stdev=43.15 00:15:58.977 clat percentiles (msec): 00:15:58.977 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 11], 00:15:58.977 | 30.00th=[ 13], 40.00th=[ 17], 50.00th=[ 22], 60.00th=[ 33], 00:15:58.977 | 70.00th=[ 59], 80.00th=[ 105], 90.00th=[ 116], 95.00th=[ 121], 00:15:58.977 | 99.00th=[ 136], 99.50th=[ 136], 99.90th=[ 136], 99.95th=[ 136], 00:15:58.977 | 99.99th=[ 136] 00:15:58.977 write: IOPS=1467, BW=5868KiB/s (6009kB/s)(6144KiB/1047msec); 0 zone resets 00:15:58.978 slat (usec): min=4, max=23307, avg=298.03, stdev=1915.10 00:15:58.978 clat (usec): min=7395, max=97242, avg=40397.87, stdev=22636.94 00:15:58.978 lat (usec): min=8424, max=97253, avg=40695.90, stdev=22683.29 00:15:58.978 clat percentiles (usec): 00:15:58.978 | 1.00th=[ 8586], 5.00th=[11207], 10.00th=[13698], 20.00th=[19268], 00:15:58.978 | 30.00th=[22152], 40.00th=[27395], 50.00th=[32900], 60.00th=[45876], 00:15:58.978 | 70.00th=[58983], 80.00th=[64750], 90.00th=[68682], 95.00th=[72877], 00:15:58.978 | 99.00th=[96994], 99.50th=[96994], 99.90th=[96994], 99.95th=[96994], 00:15:58.978 | 99.99th=[96994] 00:15:58.978 bw ( KiB/s): min= 2552, max= 9736, per=10.78%, avg=6144.00, stdev=5079.86, samples=2 00:15:58.978 iops : min= 638, max= 2434, avg=1536.00, stdev=1269.96, samples=2 00:15:58.978 lat (msec) : 10=6.81%, 20=26.54%, 50=29.27%, 100=25.16%, 250=12.23% 00:15:58.978 cpu : usr=1.24%, sys=3.06%, ctx=125, majf=0, minf=15 00:15:58.978 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:15:58.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:58.978 issued rwts: total=1505,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:58.978 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:58.978 job2: (groupid=0, jobs=1): err= 0: pid=1121734: Sat Jul 13 06:11:05 2024 00:15:58.978 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:15:58.978 slat (usec): min=2, max=10323, avg=100.98, stdev=738.19 00:15:58.978 clat (usec): min=4730, max=26077, avg=13783.65, stdev=3026.91 00:15:58.978 lat (usec): min=4745, max=26081, avg=13884.63, stdev=3065.92 00:15:58.978 clat percentiles (usec): 00:15:58.978 | 1.00th=[ 8094], 5.00th=[ 9896], 10.00th=[10683], 20.00th=[11338], 00:15:58.978 | 30.00th=[12256], 40.00th=[12518], 50.00th=[13173], 60.00th=[13829], 00:15:58.978 | 70.00th=[15008], 80.00th=[15926], 90.00th=[17957], 95.00th=[19792], 00:15:58.978 | 99.00th=[21365], 99.50th=[25297], 99.90th=[26084], 99.95th=[26084], 00:15:58.978 | 99.99th=[26084] 00:15:58.978 write: IOPS=4659, BW=18.2MiB/s (19.1MB/s)(18.3MiB/1005msec); 0 zone resets 00:15:58.978 slat (usec): min=3, max=28607, avg=103.32, stdev=838.47 00:15:58.978 clat (usec): min=1959, max=54141, avg=13535.25, stdev=7274.75 00:15:58.978 lat (usec): min=3932, max=54154, avg=13638.58, stdev=7311.20 00:15:58.978 clat percentiles (usec): 00:15:58.978 | 1.00th=[ 5735], 5.00th=[ 6783], 10.00th=[ 7570], 20.00th=[ 8455], 00:15:58.978 | 30.00th=[ 9503], 40.00th=[11469], 50.00th=[12256], 60.00th=[12780], 00:15:58.978 | 
70.00th=[14877], 80.00th=[16057], 90.00th=[19530], 95.00th=[25035], 00:15:58.978 | 99.00th=[47973], 99.50th=[53740], 99.90th=[53740], 99.95th=[54264], 00:15:58.978 | 99.99th=[54264] 00:15:58.978 bw ( KiB/s): min=16432, max=20480, per=32.37%, avg=18456.00, stdev=2862.37, samples=2 00:15:58.978 iops : min= 4108, max= 5120, avg=4614.00, stdev=715.59, samples=2 00:15:58.978 lat (msec) : 2=0.01%, 4=0.06%, 10=19.40%, 20=73.42%, 50=6.73% 00:15:58.978 lat (msec) : 100=0.39% 00:15:58.978 cpu : usr=4.48%, sys=7.07%, ctx=252, majf=0, minf=13 00:15:58.978 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:15:58.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:58.978 issued rwts: total=4608,4683,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:58.978 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:58.978 job3: (groupid=0, jobs=1): err= 0: pid=1121735: Sat Jul 13 06:11:05 2024 00:15:58.978 read: IOPS=3252, BW=12.7MiB/s (13.3MB/s)(12.7MiB/1002msec) 00:15:58.978 slat (usec): min=2, max=35576, avg=177.71, stdev=1540.96 00:15:58.978 clat (usec): min=838, max=112945, avg=22299.07, stdev=21954.06 00:15:58.978 lat (msec): min=4, max=112, avg=22.48, stdev=22.14 00:15:58.978 clat percentiles (msec): 00:15:58.978 | 1.00th=[ 6], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 11], 00:15:58.978 | 30.00th=[ 12], 40.00th=[ 13], 50.00th=[ 13], 60.00th=[ 14], 00:15:58.978 | 70.00th=[ 14], 80.00th=[ 22], 90.00th=[ 66], 95.00th=[ 71], 00:15:58.978 | 99.00th=[ 93], 99.50th=[ 93], 99.90th=[ 97], 99.95th=[ 109], 00:15:58.978 | 99.99th=[ 113] 00:15:58.978 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:15:58.978 slat (usec): min=3, max=18278, avg=110.14, stdev=730.41 00:15:58.978 clat (usec): min=5674, max=82210, avg=15054.47, stdev=9661.01 00:15:58.978 lat (usec): min=5680, max=82226, avg=15164.62, stdev=9694.39 00:15:58.978 clat percentiles (usec): 00:15:58.978 | 1.00th=[ 6980], 5.00th=[ 8979], 10.00th=[ 9896], 20.00th=[10290], 00:15:58.978 | 30.00th=[10814], 40.00th=[11994], 50.00th=[12518], 60.00th=[12911], 00:15:58.978 | 70.00th=[15008], 80.00th=[16319], 90.00th=[23200], 95.00th=[26084], 00:15:58.978 | 99.00th=[68682], 99.50th=[72877], 99.90th=[72877], 99.95th=[72877], 00:15:58.978 | 99.99th=[82314] 00:15:58.978 bw ( KiB/s): min=12288, max=16384, per=25.15%, avg=14336.00, stdev=2896.31, samples=2 00:15:58.978 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:15:58.978 lat (usec) : 1000=0.01% 00:15:58.978 lat (msec) : 10=11.11%, 20=71.65%, 50=8.91%, 100=8.27%, 250=0.04% 00:15:58.978 cpu : usr=3.70%, sys=4.90%, ctx=303, majf=0, minf=11 00:15:58.978 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:15:58.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:58.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:58.978 issued rwts: total=3259,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:58.978 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:58.978 00:15:58.978 Run status group 0 (all jobs): 00:15:58.978 READ: bw=54.0MiB/s (56.6MB/s), 5750KiB/s-19.1MiB/s (5888kB/s-20.0MB/s), io=56.5MiB (59.3MB), run=1002-1047msec 00:15:58.978 WRITE: bw=55.7MiB/s (58.4MB/s), 5868KiB/s-19.1MiB/s (6009kB/s-20.0MB/s), io=58.3MiB (61.1MB), run=1002-1047msec 00:15:58.978 00:15:58.978 Disk stats (read/write): 00:15:58.978 nvme0n1: ios=4664/5055, merge=0/0, 
ticks=49409/54569, in_queue=103978, util=87.47% 00:15:58.978 nvme0n2: ios=1329/1536, merge=0/0, ticks=12430/14431, in_queue=26861, util=89.75% 00:15:58.978 nvme0n3: ios=3648/4044, merge=0/0, ticks=43559/43673, in_queue=87232, util=95.00% 00:15:58.978 nvme0n4: ios=2582/2560, merge=0/0, ticks=23350/13053, in_queue=36403, util=95.38% 00:15:58.978 06:11:05 -- target/fio.sh@55 -- # sync 00:15:58.978 06:11:05 -- target/fio.sh@59 -- # fio_pid=1121875 00:15:58.978 06:11:05 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:15:58.978 06:11:05 -- target/fio.sh@61 -- # sleep 3 00:15:58.978 [global] 00:15:58.978 thread=1 00:15:58.978 invalidate=1 00:15:58.978 rw=read 00:15:58.978 time_based=1 00:15:58.978 runtime=10 00:15:58.978 ioengine=libaio 00:15:58.978 direct=1 00:15:58.978 bs=4096 00:15:58.978 iodepth=1 00:15:58.978 norandommap=1 00:15:58.978 numjobs=1 00:15:58.978 00:15:58.978 [job0] 00:15:58.978 filename=/dev/nvme0n1 00:15:58.978 [job1] 00:15:58.978 filename=/dev/nvme0n2 00:15:58.978 [job2] 00:15:58.978 filename=/dev/nvme0n3 00:15:58.978 [job3] 00:15:58.978 filename=/dev/nvme0n4 00:15:58.978 Could not set queue depth (nvme0n1) 00:15:58.978 Could not set queue depth (nvme0n2) 00:15:58.978 Could not set queue depth (nvme0n3) 00:15:58.978 Could not set queue depth (nvme0n4) 00:15:58.978 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:58.978 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:58.978 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:58.978 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:58.978 fio-3.35 00:15:58.978 Starting 4 threads 00:16:02.254 06:11:08 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:02.254 06:11:08 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:02.254 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=30748672, buflen=4096 00:16:02.254 fio: pid=1122048, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:02.254 06:11:08 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:02.254 06:11:08 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:02.254 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=315392, buflen=4096 00:16:02.254 fio: pid=1122037, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:02.510 06:11:08 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:02.510 06:11:08 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:02.510 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=1757184, buflen=4096 00:16:02.510 fio: pid=1121984, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:02.767 06:11:09 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:02.767 06:11:09 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:02.767 fio: 
io_u error on file /dev/nvme0n2: Remote I/O error: read offset=13058048, buflen=4096 00:16:02.767 fio: pid=1121999, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:02.767 00:16:02.767 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1121984: Sat Jul 13 06:11:09 2024 00:16:02.767 read: IOPS=125, BW=502KiB/s (514kB/s)(1716KiB/3418msec) 00:16:02.767 slat (usec): min=4, max=11870, avg=45.07, stdev=571.69 00:16:02.767 clat (usec): min=298, max=45002, avg=7865.39, stdev=15773.33 00:16:02.767 lat (usec): min=313, max=53165, avg=7910.52, stdev=15843.00 00:16:02.767 clat percentiles (usec): 00:16:02.767 | 1.00th=[ 306], 5.00th=[ 318], 10.00th=[ 334], 20.00th=[ 351], 00:16:02.767 | 30.00th=[ 367], 40.00th=[ 375], 50.00th=[ 388], 60.00th=[ 400], 00:16:02.767 | 70.00th=[ 424], 80.00th=[ 506], 90.00th=[41157], 95.00th=[41157], 00:16:02.767 | 99.00th=[41157], 99.50th=[41157], 99.90th=[44827], 99.95th=[44827], 00:16:02.767 | 99.99th=[44827] 00:16:02.767 bw ( KiB/s): min= 96, max= 2864, per=4.56%, avg=558.67, stdev=1129.38, samples=6 00:16:02.767 iops : min= 24, max= 716, avg=139.67, stdev=282.35, samples=6 00:16:02.767 lat (usec) : 500=79.53%, 750=1.86% 00:16:02.767 lat (msec) : 50=18.37% 00:16:02.767 cpu : usr=0.18%, sys=0.18%, ctx=434, majf=0, minf=1 00:16:02.767 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:02.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.767 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.767 issued rwts: total=430,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:02.767 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:02.767 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1121999: Sat Jul 13 06:11:09 2024 00:16:02.767 read: IOPS=871, BW=3484KiB/s (3568kB/s)(12.5MiB/3660msec) 00:16:02.767 slat (usec): min=4, max=12834, avg=18.13, stdev=227.21 00:16:02.767 clat (usec): min=289, max=41407, avg=1119.17, stdev=5452.02 00:16:02.767 lat (usec): min=294, max=43539, avg=1137.30, stdev=5479.29 00:16:02.767 clat percentiles (usec): 00:16:02.767 | 1.00th=[ 302], 5.00th=[ 306], 10.00th=[ 314], 20.00th=[ 322], 00:16:02.767 | 30.00th=[ 330], 40.00th=[ 351], 50.00th=[ 375], 60.00th=[ 379], 00:16:02.767 | 70.00th=[ 388], 80.00th=[ 408], 90.00th=[ 461], 95.00th=[ 486], 00:16:02.767 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:16:02.767 | 99.99th=[41157] 00:16:02.767 bw ( KiB/s): min= 96, max=10584, per=29.53%, avg=3615.43, stdev=4312.19, samples=7 00:16:02.767 iops : min= 24, max= 2646, avg=903.86, stdev=1078.05, samples=7 00:16:02.767 lat (usec) : 500=96.21%, 750=1.79%, 1000=0.09% 00:16:02.767 lat (msec) : 2=0.03%, 50=1.85% 00:16:02.767 cpu : usr=0.60%, sys=1.34%, ctx=3191, majf=0, minf=1 00:16:02.767 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:02.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.767 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.767 issued rwts: total=3189,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:02.767 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:02.767 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1122037: Sat Jul 13 06:11:09 2024 00:16:02.767 read: IOPS=24, BW=98.1KiB/s (100kB/s)(308KiB/3140msec) 00:16:02.767 slat (nsec): min=9003, 
max=44563, avg=19974.40, stdev=8379.53 00:16:02.767 clat (usec): min=491, max=41935, avg=40462.87, stdev=4616.57 00:16:02.767 lat (usec): min=535, max=41970, avg=40482.63, stdev=4613.72 00:16:02.767 clat percentiles (usec): 00:16:02.767 | 1.00th=[ 490], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:16:02.767 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:02.767 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:16:02.767 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:16:02.767 | 99.99th=[41681] 00:16:02.767 bw ( KiB/s): min= 96, max= 104, per=0.80%, avg=98.67, stdev= 4.13, samples=6 00:16:02.767 iops : min= 24, max= 26, avg=24.67, stdev= 1.03, samples=6 00:16:02.767 lat (usec) : 500=1.28% 00:16:02.767 lat (msec) : 50=97.44% 00:16:02.767 cpu : usr=0.10%, sys=0.00%, ctx=78, majf=0, minf=1 00:16:02.767 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:02.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.767 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.767 issued rwts: total=78,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:02.767 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:02.767 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1122048: Sat Jul 13 06:11:09 2024 00:16:02.767 read: IOPS=2604, BW=10.2MiB/s (10.7MB/s)(29.3MiB/2883msec) 00:16:02.767 slat (nsec): min=5126, max=61861, avg=10736.41, stdev=5492.81 00:16:02.767 clat (usec): min=287, max=41023, avg=367.44, stdev=469.98 00:16:02.767 lat (usec): min=295, max=41031, avg=378.17, stdev=470.09 00:16:02.767 clat percentiles (usec): 00:16:02.767 | 1.00th=[ 318], 5.00th=[ 334], 10.00th=[ 338], 20.00th=[ 343], 00:16:02.767 | 30.00th=[ 347], 40.00th=[ 355], 50.00th=[ 359], 60.00th=[ 367], 00:16:02.767 | 70.00th=[ 371], 80.00th=[ 379], 90.00th=[ 388], 95.00th=[ 400], 00:16:02.767 | 99.00th=[ 453], 99.50th=[ 469], 99.90th=[ 545], 99.95th=[ 562], 00:16:02.767 | 99.99th=[41157] 00:16:02.767 bw ( KiB/s): min=10128, max=11032, per=86.88%, avg=10636.80, stdev=342.16, samples=5 00:16:02.767 iops : min= 2532, max= 2758, avg=2659.20, stdev=85.54, samples=5 00:16:02.767 lat (usec) : 500=99.75%, 750=0.21%, 1000=0.01% 00:16:02.767 lat (msec) : 50=0.01% 00:16:02.767 cpu : usr=2.08%, sys=4.13%, ctx=7508, majf=0, minf=1 00:16:02.767 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:02.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.767 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.767 issued rwts: total=7508,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:02.767 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:02.767 00:16:02.767 Run status group 0 (all jobs): 00:16:02.767 READ: bw=12.0MiB/s (12.5MB/s), 98.1KiB/s-10.2MiB/s (100kB/s-10.7MB/s), io=43.8MiB (45.9MB), run=2883-3660msec 00:16:02.767 00:16:02.767 Disk stats (read/write): 00:16:02.767 nvme0n1: ios=469/0, merge=0/0, ticks=4382/0, in_queue=4382, util=98.88% 00:16:02.767 nvme0n2: ios=3186/0, merge=0/0, ticks=3456/0, in_queue=3456, util=96.11% 00:16:02.767 nvme0n3: ios=76/0, merge=0/0, ticks=3077/0, in_queue=3077, util=96.72% 00:16:02.767 nvme0n4: ios=7469/0, merge=0/0, ticks=2656/0, in_queue=2656, util=96.74% 00:16:03.026 06:11:09 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:03.026 06:11:09 
-- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:03.293 06:11:09 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:03.293 06:11:09 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:03.550 06:11:09 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:03.550 06:11:09 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:03.808 06:11:10 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:03.808 06:11:10 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:04.066 06:11:10 -- target/fio.sh@69 -- # fio_status=0 00:16:04.066 06:11:10 -- target/fio.sh@70 -- # wait 1121875 00:16:04.066 06:11:10 -- target/fio.sh@70 -- # fio_status=4 00:16:04.066 06:11:10 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:04.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.066 06:11:10 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:04.322 06:11:10 -- common/autotest_common.sh@1198 -- # local i=0 00:16:04.322 06:11:10 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:04.322 06:11:10 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:04.322 06:11:10 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:04.322 06:11:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:04.322 06:11:10 -- common/autotest_common.sh@1210 -- # return 0 00:16:04.322 06:11:10 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:04.322 06:11:10 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:04.322 nvmf hotplug test: fio failed as expected 00:16:04.322 06:11:10 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:04.322 06:11:10 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:04.580 06:11:10 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:04.580 06:11:10 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:04.580 06:11:10 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:04.580 06:11:10 -- target/fio.sh@91 -- # nvmftestfini 00:16:04.580 06:11:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:04.580 06:11:10 -- nvmf/common.sh@116 -- # sync 00:16:04.580 06:11:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:04.580 06:11:10 -- nvmf/common.sh@119 -- # set +e 00:16:04.580 06:11:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:04.580 06:11:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:04.580 rmmod nvme_tcp 00:16:04.580 rmmod nvme_fabrics 00:16:04.580 rmmod nvme_keyring 00:16:04.580 06:11:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:04.580 06:11:10 -- nvmf/common.sh@123 -- # set -e 00:16:04.580 06:11:10 -- nvmf/common.sh@124 -- # return 0 00:16:04.580 06:11:10 -- nvmf/common.sh@477 -- # '[' -n 1119792 ']' 00:16:04.580 06:11:10 -- nvmf/common.sh@478 -- # killprocess 1119792 00:16:04.580 06:11:10 -- common/autotest_common.sh@926 -- # '[' -z 1119792 ']' 00:16:04.580 06:11:10 -- common/autotest_common.sh@930 -- # kill -0 1119792 00:16:04.580 06:11:10 
-- common/autotest_common.sh@931 -- # uname 00:16:04.580 06:11:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:04.580 06:11:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1119792 00:16:04.580 06:11:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:04.580 06:11:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:04.580 06:11:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1119792' 00:16:04.580 killing process with pid 1119792 00:16:04.580 06:11:10 -- common/autotest_common.sh@945 -- # kill 1119792 00:16:04.580 06:11:10 -- common/autotest_common.sh@950 -- # wait 1119792 00:16:04.837 06:11:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:04.837 06:11:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:04.837 06:11:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:04.837 06:11:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:04.837 06:11:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:04.837 06:11:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.837 06:11:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:04.837 06:11:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.737 06:11:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:06.737 00:16:06.737 real 0m23.866s 00:16:06.737 user 1m23.544s 00:16:06.737 sys 0m6.441s 00:16:06.737 06:11:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:06.737 06:11:13 -- common/autotest_common.sh@10 -- # set +x 00:16:06.737 ************************************ 00:16:06.737 END TEST nvmf_fio_target 00:16:06.737 ************************************ 00:16:06.994 06:11:13 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:06.994 06:11:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:06.994 06:11:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:06.994 06:11:13 -- common/autotest_common.sh@10 -- # set +x 00:16:06.994 ************************************ 00:16:06.994 START TEST nvmf_bdevio 00:16:06.994 ************************************ 00:16:06.994 06:11:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:06.994 * Looking for test storage... 
00:16:06.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:06.994 06:11:13 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:06.994 06:11:13 -- nvmf/common.sh@7 -- # uname -s 00:16:06.994 06:11:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:06.994 06:11:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:06.994 06:11:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:06.994 06:11:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:06.994 06:11:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:06.994 06:11:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:06.994 06:11:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:06.994 06:11:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:06.994 06:11:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:06.994 06:11:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:06.994 06:11:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:06.994 06:11:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:06.994 06:11:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:06.994 06:11:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:06.994 06:11:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:06.994 06:11:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:06.994 06:11:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:06.994 06:11:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:06.994 06:11:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:06.994 06:11:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.994 06:11:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.994 06:11:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.994 06:11:13 -- paths/export.sh@5 -- # export PATH 00:16:06.994 06:11:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.994 06:11:13 -- nvmf/common.sh@46 -- # : 0 00:16:06.994 06:11:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:06.994 06:11:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:06.994 06:11:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:06.994 06:11:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:06.994 06:11:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:06.994 06:11:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:06.994 06:11:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:06.994 06:11:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:06.994 06:11:13 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:06.994 06:11:13 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:06.994 06:11:13 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:06.994 06:11:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:06.994 06:11:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:06.994 06:11:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:06.994 06:11:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:06.994 06:11:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:06.994 06:11:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.994 06:11:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:06.994 06:11:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.994 06:11:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:06.994 06:11:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:06.994 06:11:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:06.994 06:11:13 -- common/autotest_common.sh@10 -- # set +x 00:16:08.894 06:11:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:08.894 06:11:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:08.894 06:11:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:08.894 06:11:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:08.894 06:11:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:08.894 06:11:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:08.894 06:11:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:08.894 06:11:15 -- nvmf/common.sh@294 -- # net_devs=() 00:16:08.894 06:11:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:08.894 06:11:15 -- nvmf/common.sh@295 
-- # e810=() 00:16:08.894 06:11:15 -- nvmf/common.sh@295 -- # local -ga e810 00:16:08.894 06:11:15 -- nvmf/common.sh@296 -- # x722=() 00:16:08.894 06:11:15 -- nvmf/common.sh@296 -- # local -ga x722 00:16:08.894 06:11:15 -- nvmf/common.sh@297 -- # mlx=() 00:16:08.894 06:11:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:08.894 06:11:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:08.894 06:11:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:08.894 06:11:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:08.894 06:11:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:08.894 06:11:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:08.894 06:11:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:08.894 06:11:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:08.894 06:11:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:08.894 06:11:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:08.894 06:11:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:08.894 06:11:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:08.894 06:11:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:08.894 06:11:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:08.894 06:11:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:08.894 06:11:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:08.894 06:11:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:08.894 06:11:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:08.894 06:11:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:08.894 06:11:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:08.894 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:08.894 06:11:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:08.894 06:11:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:08.894 06:11:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:08.894 06:11:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:08.894 06:11:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:08.894 06:11:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:08.894 06:11:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:08.894 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:08.894 06:11:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:08.894 06:11:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:08.894 06:11:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:08.894 06:11:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:08.894 06:11:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:08.894 06:11:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:08.894 06:11:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:08.894 06:11:15 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:08.894 06:11:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:08.894 06:11:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:08.894 06:11:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:08.894 06:11:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:08.894 06:11:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:08.894 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:16:08.894 06:11:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:08.894 06:11:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:08.894 06:11:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:08.894 06:11:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:08.894 06:11:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:08.894 06:11:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:08.894 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:08.894 06:11:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:08.894 06:11:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:08.894 06:11:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:08.894 06:11:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:08.894 06:11:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:08.894 06:11:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:08.894 06:11:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:08.894 06:11:15 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:08.894 06:11:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:08.894 06:11:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:08.894 06:11:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:08.894 06:11:15 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:08.894 06:11:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:08.894 06:11:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:08.894 06:11:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:08.894 06:11:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:08.894 06:11:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:08.894 06:11:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:08.894 06:11:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:08.894 06:11:15 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:08.894 06:11:15 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:08.894 06:11:15 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:08.894 06:11:15 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:08.894 06:11:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:08.894 06:11:15 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:09.153 06:11:15 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:09.153 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:09.153 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:16:09.153 00:16:09.153 --- 10.0.0.2 ping statistics --- 00:16:09.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.153 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:16:09.153 06:11:15 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:09.153 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:09.153 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:16:09.153 00:16:09.153 --- 10.0.0.1 ping statistics --- 00:16:09.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.153 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:16:09.153 06:11:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:09.153 06:11:15 -- nvmf/common.sh@410 -- # return 0 00:16:09.153 06:11:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:09.153 06:11:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:09.153 06:11:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:09.153 06:11:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:09.153 06:11:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:09.153 06:11:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:09.153 06:11:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:09.153 06:11:15 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:09.153 06:11:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:09.153 06:11:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:09.153 06:11:15 -- common/autotest_common.sh@10 -- # set +x 00:16:09.153 06:11:15 -- nvmf/common.sh@469 -- # nvmfpid=1124620 00:16:09.153 06:11:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:09.153 06:11:15 -- nvmf/common.sh@470 -- # waitforlisten 1124620 00:16:09.153 06:11:15 -- common/autotest_common.sh@819 -- # '[' -z 1124620 ']' 00:16:09.153 06:11:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.153 06:11:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:09.153 06:11:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.153 06:11:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:09.153 06:11:15 -- common/autotest_common.sh@10 -- # set +x 00:16:09.153 [2024-07-13 06:11:15.490219] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:09.153 [2024-07-13 06:11:15.490315] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:09.153 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.153 [2024-07-13 06:11:15.560907] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:09.411 [2024-07-13 06:11:15.683419] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:09.412 [2024-07-13 06:11:15.683587] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:09.412 [2024-07-13 06:11:15.683608] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:09.412 [2024-07-13 06:11:15.683622] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
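The nvmf_tcp_init trace above is the harness wiring the two E810 ports into a loopback topology: cvl_0_0 is moved into a private network namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule opening TCP port 4420 on the initiator-side interface. A condensed sketch of those steps, with the interface names and addresses taken from this log:

  # move the target-side port into its own namespace so both ends of the
  # NVMe/TCP connection can run on one host over real NICs
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # initiator keeps 10.0.0.1 in the root namespace, target gets 10.0.0.2
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # accept NVMe/TCP traffic (port 4420) arriving on the initiator-side interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # reachability checks in both directions, matching the ping output above
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1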
00:16:09.412 [2024-07-13 06:11:15.683708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:09.412 [2024-07-13 06:11:15.683768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:09.412 [2024-07-13 06:11:15.683818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:09.412 [2024-07-13 06:11:15.683821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:09.977 06:11:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:09.977 06:11:16 -- common/autotest_common.sh@852 -- # return 0 00:16:09.977 06:11:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:09.977 06:11:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:09.977 06:11:16 -- common/autotest_common.sh@10 -- # set +x 00:16:09.977 06:11:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.977 06:11:16 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:09.977 06:11:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:09.977 06:11:16 -- common/autotest_common.sh@10 -- # set +x 00:16:09.977 [2024-07-13 06:11:16.466325] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:09.977 06:11:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:09.977 06:11:16 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:09.978 06:11:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:09.978 06:11:16 -- common/autotest_common.sh@10 -- # set +x 00:16:10.234 Malloc0 00:16:10.234 06:11:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.234 06:11:16 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:10.234 06:11:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.234 06:11:16 -- common/autotest_common.sh@10 -- # set +x 00:16:10.234 06:11:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.234 06:11:16 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:10.234 06:11:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.234 06:11:16 -- common/autotest_common.sh@10 -- # set +x 00:16:10.234 06:11:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.234 06:11:16 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:10.234 06:11:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:10.234 06:11:16 -- common/autotest_common.sh@10 -- # set +x 00:16:10.234 [2024-07-13 06:11:16.517831] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:10.234 06:11:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:10.234 06:11:16 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:10.234 06:11:16 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:10.234 06:11:16 -- nvmf/common.sh@520 -- # config=() 00:16:10.234 06:11:16 -- nvmf/common.sh@520 -- # local subsystem config 00:16:10.234 06:11:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:10.234 06:11:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:10.234 { 00:16:10.234 "params": { 00:16:10.234 "name": "Nvme$subsystem", 00:16:10.234 "trtype": "$TEST_TRANSPORT", 00:16:10.234 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:10.234 "adrfam": "ipv4", 00:16:10.234 "trsvcid": 
"$NVMF_PORT", 00:16:10.234 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:10.234 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:10.234 "hdgst": ${hdgst:-false}, 00:16:10.234 "ddgst": ${ddgst:-false} 00:16:10.234 }, 00:16:10.234 "method": "bdev_nvme_attach_controller" 00:16:10.234 } 00:16:10.234 EOF 00:16:10.234 )") 00:16:10.234 06:11:16 -- nvmf/common.sh@542 -- # cat 00:16:10.234 06:11:16 -- nvmf/common.sh@544 -- # jq . 00:16:10.234 06:11:16 -- nvmf/common.sh@545 -- # IFS=, 00:16:10.234 06:11:16 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:10.234 "params": { 00:16:10.234 "name": "Nvme1", 00:16:10.234 "trtype": "tcp", 00:16:10.234 "traddr": "10.0.0.2", 00:16:10.235 "adrfam": "ipv4", 00:16:10.235 "trsvcid": "4420", 00:16:10.235 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:10.235 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:10.235 "hdgst": false, 00:16:10.235 "ddgst": false 00:16:10.235 }, 00:16:10.235 "method": "bdev_nvme_attach_controller" 00:16:10.235 }' 00:16:10.235 [2024-07-13 06:11:16.555832] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:10.235 [2024-07-13 06:11:16.555950] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1124780 ] 00:16:10.235 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.235 [2024-07-13 06:11:16.617019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:10.235 [2024-07-13 06:11:16.727833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.235 [2024-07-13 06:11:16.727916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:10.235 [2024-07-13 06:11:16.727922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.491 [2024-07-13 06:11:16.898928] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:16:10.491 [2024-07-13 06:11:16.898988] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:10.492 I/O targets: 00:16:10.492 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:10.492 00:16:10.492 00:16:10.492 CUnit - A unit testing framework for C - Version 2.1-3 00:16:10.492 http://cunit.sourceforge.net/ 00:16:10.492 00:16:10.492 00:16:10.492 Suite: bdevio tests on: Nvme1n1 00:16:10.492 Test: blockdev write read block ...passed 00:16:10.492 Test: blockdev write zeroes read block ...passed 00:16:10.492 Test: blockdev write zeroes read no split ...passed 00:16:10.749 Test: blockdev write zeroes read split ...passed 00:16:10.749 Test: blockdev write zeroes read split partial ...passed 00:16:10.749 Test: blockdev reset ...[2024-07-13 06:11:17.101778] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:10.749 [2024-07-13 06:11:17.101891] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1454180 (9): Bad file descriptor 00:16:10.749 [2024-07-13 06:11:17.210087] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:10.749 passed 00:16:10.749 Test: blockdev write read 8 blocks ...passed 00:16:10.749 Test: blockdev write read size > 128k ...passed 00:16:10.749 Test: blockdev write read invalid size ...passed 00:16:10.749 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:10.749 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:10.749 Test: blockdev write read max offset ...passed 00:16:11.007 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:11.007 Test: blockdev writev readv 8 blocks ...passed 00:16:11.007 Test: blockdev writev readv 30 x 1block ...passed 00:16:11.007 Test: blockdev writev readv block ...passed 00:16:11.007 Test: blockdev writev readv size > 128k ...passed 00:16:11.007 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:11.007 Test: blockdev comparev and writev ...[2024-07-13 06:11:17.423847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:11.007 [2024-07-13 06:11:17.423891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:11.007 [2024-07-13 06:11:17.423916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:11.007 [2024-07-13 06:11:17.423934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:11.007 [2024-07-13 06:11:17.424322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:11.007 [2024-07-13 06:11:17.424348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:11.007 [2024-07-13 06:11:17.424370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:11.007 [2024-07-13 06:11:17.424386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:11.007 [2024-07-13 06:11:17.424760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:11.007 [2024-07-13 06:11:17.424794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:11.007 [2024-07-13 06:11:17.424823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:11.007 [2024-07-13 06:11:17.424840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:11.007 [2024-07-13 06:11:17.425221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:11.007 [2024-07-13 06:11:17.425245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:11.007 [2024-07-13 06:11:17.425266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:11.007 [2024-07-13 06:11:17.425282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:11.007 passed 00:16:11.007 Test: blockdev nvme passthru rw ...passed 00:16:11.007 Test: blockdev nvme passthru vendor specific ...[2024-07-13 06:11:17.507197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:11.007 [2024-07-13 06:11:17.507224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:11.007 [2024-07-13 06:11:17.507416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:11.007 [2024-07-13 06:11:17.507438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:11.007 [2024-07-13 06:11:17.507628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:11.007 [2024-07-13 06:11:17.507651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:11.007 [2024-07-13 06:11:17.507836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:11.007 [2024-07-13 06:11:17.507860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:11.007 passed 00:16:11.266 Test: blockdev nvme admin passthru ...passed 00:16:11.266 Test: blockdev copy ...passed 00:16:11.266 00:16:11.266 Run Summary: Type Total Ran Passed Failed Inactive 00:16:11.266 suites 1 1 n/a 0 0 00:16:11.266 tests 23 23 23 0 0 00:16:11.266 asserts 152 152 152 0 n/a 00:16:11.266 00:16:11.266 Elapsed time = 1.328 seconds 00:16:11.524 06:11:17 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:11.524 06:11:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.524 06:11:17 -- common/autotest_common.sh@10 -- # set +x 00:16:11.524 06:11:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.524 06:11:17 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:11.524 06:11:17 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:11.524 06:11:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:11.524 06:11:17 -- nvmf/common.sh@116 -- # sync 00:16:11.524 06:11:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:11.524 06:11:17 -- nvmf/common.sh@119 -- # set +e 00:16:11.524 06:11:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:11.524 06:11:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:11.524 rmmod nvme_tcp 00:16:11.524 rmmod nvme_fabrics 00:16:11.524 rmmod nvme_keyring 00:16:11.524 06:11:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:11.524 06:11:17 -- nvmf/common.sh@123 -- # set -e 00:16:11.524 06:11:17 -- nvmf/common.sh@124 -- # return 0 00:16:11.524 06:11:17 -- nvmf/common.sh@477 -- # '[' -n 1124620 ']' 00:16:11.524 06:11:17 -- nvmf/common.sh@478 -- # killprocess 1124620 00:16:11.524 06:11:17 -- common/autotest_common.sh@926 -- # '[' -z 1124620 ']' 00:16:11.524 06:11:17 -- common/autotest_common.sh@930 -- # kill -0 1124620 00:16:11.524 06:11:17 -- common/autotest_common.sh@931 -- # uname 00:16:11.524 06:11:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:11.524 06:11:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1124620 00:16:11.524 06:11:17 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:16:11.524 06:11:17 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:16:11.524 06:11:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1124620' 00:16:11.524 killing process with pid 1124620 00:16:11.524 06:11:17 -- common/autotest_common.sh@945 -- # kill 1124620 00:16:11.524 06:11:17 -- common/autotest_common.sh@950 -- # wait 1124620 00:16:11.782 06:11:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:11.782 06:11:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:11.782 06:11:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:11.782 06:11:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:11.782 06:11:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:11.782 06:11:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.782 06:11:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:11.782 06:11:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.314 06:11:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:14.314 00:16:14.314 real 0m6.955s 00:16:14.314 user 0m12.945s 00:16:14.314 sys 0m2.090s 00:16:14.314 06:11:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:14.314 06:11:20 -- common/autotest_common.sh@10 -- # set +x 00:16:14.314 ************************************ 00:16:14.314 END TEST nvmf_bdevio 00:16:14.314 ************************************ 00:16:14.314 06:11:20 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:16:14.314 06:11:20 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:14.314 06:11:20 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:14.314 06:11:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:14.314 06:11:20 -- common/autotest_common.sh@10 -- # set +x 00:16:14.314 ************************************ 00:16:14.314 START TEST nvmf_bdevio_no_huge 00:16:14.314 ************************************ 00:16:14.314 06:11:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:14.314 * Looking for test storage... 
00:16:14.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:14.314 06:11:20 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:14.314 06:11:20 -- nvmf/common.sh@7 -- # uname -s 00:16:14.314 06:11:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:14.314 06:11:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:14.314 06:11:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:14.314 06:11:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:14.314 06:11:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:14.314 06:11:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:14.314 06:11:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:14.314 06:11:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:14.314 06:11:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:14.314 06:11:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:14.314 06:11:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:14.314 06:11:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:14.314 06:11:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:14.314 06:11:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:14.314 06:11:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:14.314 06:11:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:14.314 06:11:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.314 06:11:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.314 06:11:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.314 06:11:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.314 06:11:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.314 06:11:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.314 06:11:20 -- paths/export.sh@5 -- # export PATH 00:16:14.314 06:11:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.314 06:11:20 -- nvmf/common.sh@46 -- # : 0 00:16:14.314 06:11:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:14.314 06:11:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:14.314 06:11:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:14.314 06:11:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:14.314 06:11:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:14.314 06:11:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:14.314 06:11:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:14.314 06:11:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:14.314 06:11:20 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:14.314 06:11:20 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:14.314 06:11:20 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:14.314 06:11:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:14.314 06:11:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:14.314 06:11:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:14.314 06:11:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:14.314 06:11:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:14.314 06:11:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.314 06:11:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.314 06:11:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.314 06:11:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:14.314 06:11:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:14.314 06:11:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:14.314 06:11:20 -- common/autotest_common.sh@10 -- # set +x 00:16:16.217 06:11:22 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:16.217 06:11:22 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:16.217 06:11:22 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:16.217 06:11:22 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:16.217 06:11:22 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:16.217 06:11:22 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:16.217 06:11:22 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:16.217 06:11:22 -- nvmf/common.sh@294 -- # net_devs=() 00:16:16.217 06:11:22 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:16.217 06:11:22 -- nvmf/common.sh@295 
-- # e810=() 00:16:16.217 06:11:22 -- nvmf/common.sh@295 -- # local -ga e810 00:16:16.217 06:11:22 -- nvmf/common.sh@296 -- # x722=() 00:16:16.217 06:11:22 -- nvmf/common.sh@296 -- # local -ga x722 00:16:16.217 06:11:22 -- nvmf/common.sh@297 -- # mlx=() 00:16:16.217 06:11:22 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:16.217 06:11:22 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:16.217 06:11:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:16.217 06:11:22 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:16.217 06:11:22 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:16.217 06:11:22 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:16.217 06:11:22 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:16.217 06:11:22 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:16.217 06:11:22 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:16.217 06:11:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:16.217 06:11:22 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:16.217 06:11:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:16.217 06:11:22 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:16.217 06:11:22 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:16.217 06:11:22 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:16.217 06:11:22 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:16.217 06:11:22 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:16.217 06:11:22 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:16.217 06:11:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:16.217 06:11:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:16.217 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:16.217 06:11:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:16.217 06:11:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:16.217 06:11:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:16.217 06:11:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:16.217 06:11:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:16.217 06:11:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:16.217 06:11:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:16.217 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:16.217 06:11:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:16.217 06:11:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:16.217 06:11:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:16.217 06:11:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:16.217 06:11:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:16.217 06:11:22 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:16.217 06:11:22 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:16.217 06:11:22 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:16.217 06:11:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:16.217 06:11:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.217 06:11:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:16.217 06:11:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.217 06:11:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:16.217 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:16:16.217 06:11:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.217 06:11:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:16.217 06:11:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.217 06:11:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:16.217 06:11:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.217 06:11:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:16.217 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:16.217 06:11:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.217 06:11:22 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:16.217 06:11:22 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:16.217 06:11:22 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:16.217 06:11:22 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:16.217 06:11:22 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:16.217 06:11:22 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:16.217 06:11:22 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:16.217 06:11:22 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:16.217 06:11:22 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:16.217 06:11:22 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:16.217 06:11:22 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:16.217 06:11:22 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:16.217 06:11:22 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:16.217 06:11:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:16.217 06:11:22 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:16.217 06:11:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:16.217 06:11:22 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:16.217 06:11:22 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:16.217 06:11:22 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:16.217 06:11:22 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:16.217 06:11:22 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:16.217 06:11:22 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:16.217 06:11:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:16.217 06:11:22 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:16.217 06:11:22 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:16.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:16.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:16:16.217 00:16:16.217 --- 10.0.0.2 ping statistics --- 00:16:16.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.217 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:16:16.217 06:11:22 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:16.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:16.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:16:16.217 00:16:16.217 --- 10.0.0.1 ping statistics --- 00:16:16.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.217 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:16:16.217 06:11:22 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:16.217 06:11:22 -- nvmf/common.sh@410 -- # return 0 00:16:16.217 06:11:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:16.217 06:11:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:16.217 06:11:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:16.217 06:11:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:16.217 06:11:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:16.217 06:11:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:16.217 06:11:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:16.217 06:11:22 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:16.217 06:11:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:16.217 06:11:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:16.217 06:11:22 -- common/autotest_common.sh@10 -- # set +x 00:16:16.217 06:11:22 -- nvmf/common.sh@469 -- # nvmfpid=1126951 00:16:16.217 06:11:22 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:16.217 06:11:22 -- nvmf/common.sh@470 -- # waitforlisten 1126951 00:16:16.217 06:11:22 -- common/autotest_common.sh@819 -- # '[' -z 1126951 ']' 00:16:16.217 06:11:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.217 06:11:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:16.217 06:11:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.217 06:11:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:16.218 06:11:22 -- common/autotest_common.sh@10 -- # set +x 00:16:16.218 [2024-07-13 06:11:22.456974] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:16.218 [2024-07-13 06:11:22.457055] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:16.218 [2024-07-13 06:11:22.531259] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:16.218 [2024-07-13 06:11:22.632489] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:16.218 [2024-07-13 06:11:22.632633] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:16.218 [2024-07-13 06:11:22.632651] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:16.218 [2024-07-13 06:11:22.632664] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
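The setup above is the same topology as the earlier nvmf_bdevio run; what this suite changes is only how the applications allocate memory. nvmfappstart launches the target with --no-huge and a 1024 MB cap instead of hugepage-backed memory, which is reflected in the EAL parameter line above (-m 1024 --no-huge --iova-mode=va). Side by side, both invocations as they appear in this log:

  # earlier suite (hugepage-backed)
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78

  # this suite: no hugepages, memory capped at 1024 MB (-s is the mem size in MB)
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78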
00:16:16.218 [2024-07-13 06:11:22.632755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:16.218 [2024-07-13 06:11:22.632819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:16.218 [2024-07-13 06:11:22.632894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:16.218 [2024-07-13 06:11:22.632898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:17.149 06:11:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:17.149 06:11:23 -- common/autotest_common.sh@852 -- # return 0 00:16:17.149 06:11:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:17.149 06:11:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:17.149 06:11:23 -- common/autotest_common.sh@10 -- # set +x 00:16:17.149 06:11:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.149 06:11:23 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:17.149 06:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.149 06:11:23 -- common/autotest_common.sh@10 -- # set +x 00:16:17.149 [2024-07-13 06:11:23.403590] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:17.149 06:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.149 06:11:23 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:17.149 06:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.149 06:11:23 -- common/autotest_common.sh@10 -- # set +x 00:16:17.149 Malloc0 00:16:17.149 06:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.149 06:11:23 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:17.149 06:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.149 06:11:23 -- common/autotest_common.sh@10 -- # set +x 00:16:17.149 06:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.149 06:11:23 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:17.149 06:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.149 06:11:23 -- common/autotest_common.sh@10 -- # set +x 00:16:17.149 06:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.149 06:11:23 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:17.149 06:11:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.149 06:11:23 -- common/autotest_common.sh@10 -- # set +x 00:16:17.149 [2024-07-13 06:11:23.441594] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.149 06:11:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.149 06:11:23 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:17.149 06:11:23 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:17.149 06:11:23 -- nvmf/common.sh@520 -- # config=() 00:16:17.149 06:11:23 -- nvmf/common.sh@520 -- # local subsystem config 00:16:17.149 06:11:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:17.149 06:11:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:17.149 { 00:16:17.149 "params": { 00:16:17.149 "name": "Nvme$subsystem", 00:16:17.149 "trtype": "$TEST_TRANSPORT", 00:16:17.149 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:17.149 "adrfam": "ipv4", 00:16:17.149 
"trsvcid": "$NVMF_PORT", 00:16:17.149 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:17.149 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:17.149 "hdgst": ${hdgst:-false}, 00:16:17.149 "ddgst": ${ddgst:-false} 00:16:17.149 }, 00:16:17.149 "method": "bdev_nvme_attach_controller" 00:16:17.149 } 00:16:17.149 EOF 00:16:17.149 )") 00:16:17.149 06:11:23 -- nvmf/common.sh@542 -- # cat 00:16:17.149 06:11:23 -- nvmf/common.sh@544 -- # jq . 00:16:17.149 06:11:23 -- nvmf/common.sh@545 -- # IFS=, 00:16:17.149 06:11:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:17.149 "params": { 00:16:17.149 "name": "Nvme1", 00:16:17.149 "trtype": "tcp", 00:16:17.149 "traddr": "10.0.0.2", 00:16:17.149 "adrfam": "ipv4", 00:16:17.149 "trsvcid": "4420", 00:16:17.149 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:17.149 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:17.149 "hdgst": false, 00:16:17.149 "ddgst": false 00:16:17.149 }, 00:16:17.149 "method": "bdev_nvme_attach_controller" 00:16:17.149 }' 00:16:17.149 [2024-07-13 06:11:23.485231] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:17.149 [2024-07-13 06:11:23.485317] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1127025 ] 00:16:17.149 [2024-07-13 06:11:23.554569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:17.407 [2024-07-13 06:11:23.668624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.407 [2024-07-13 06:11:23.668674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:17.407 [2024-07-13 06:11:23.668677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.407 [2024-07-13 06:11:23.867103] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:16:17.407 [2024-07-13 06:11:23.867163] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:17.407 I/O targets: 00:16:17.407 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:17.407 00:16:17.407 00:16:17.407 CUnit - A unit testing framework for C - Version 2.1-3 00:16:17.407 http://cunit.sourceforge.net/ 00:16:17.407 00:16:17.407 00:16:17.407 Suite: bdevio tests on: Nvme1n1 00:16:17.407 Test: blockdev write read block ...passed 00:16:17.673 Test: blockdev write zeroes read block ...passed 00:16:17.673 Test: blockdev write zeroes read no split ...passed 00:16:17.673 Test: blockdev write zeroes read split ...passed 00:16:17.673 Test: blockdev write zeroes read split partial ...passed 00:16:17.673 Test: blockdev reset ...[2024-07-13 06:11:24.075322] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:17.673 [2024-07-13 06:11:24.075427] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b4eb00 (9): Bad file descriptor 00:16:17.673 [2024-07-13 06:11:24.136281] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:17.673 passed 00:16:17.673 Test: blockdev write read 8 blocks ...passed 00:16:17.673 Test: blockdev write read size > 128k ...passed 00:16:17.673 Test: blockdev write read invalid size ...passed 00:16:17.949 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:17.949 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:17.949 Test: blockdev write read max offset ...passed 00:16:17.949 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:17.949 Test: blockdev writev readv 8 blocks ...passed 00:16:17.949 Test: blockdev writev readv 30 x 1block ...passed 00:16:17.949 Test: blockdev writev readv block ...passed 00:16:17.949 Test: blockdev writev readv size > 128k ...passed 00:16:17.949 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:17.949 Test: blockdev comparev and writev ...[2024-07-13 06:11:24.312841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:17.949 [2024-07-13 06:11:24.312895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:17.949 [2024-07-13 06:11:24.312921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:17.949 [2024-07-13 06:11:24.312938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:17.949 [2024-07-13 06:11:24.313310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:17.949 [2024-07-13 06:11:24.313347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:17.949 [2024-07-13 06:11:24.313370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:17.949 [2024-07-13 06:11:24.313394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:17.949 [2024-07-13 06:11:24.313759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:17.949 [2024-07-13 06:11:24.313793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:17.949 [2024-07-13 06:11:24.313815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:17.949 [2024-07-13 06:11:24.313839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:17.949 [2024-07-13 06:11:24.314204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:17.949 [2024-07-13 06:11:24.314229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:17.949 [2024-07-13 06:11:24.314250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:17.949 [2024-07-13 06:11:24.314265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:17.949 passed 00:16:17.949 Test: blockdev nvme passthru rw ...passed 00:16:17.949 Test: blockdev nvme passthru vendor specific ...[2024-07-13 06:11:24.398178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:17.949 [2024-07-13 06:11:24.398205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:17.949 [2024-07-13 06:11:24.398389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:17.949 [2024-07-13 06:11:24.398411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:17.949 [2024-07-13 06:11:24.398584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:17.949 [2024-07-13 06:11:24.398608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:17.949 [2024-07-13 06:11:24.398780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:17.949 [2024-07-13 06:11:24.398804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:17.949 passed 00:16:17.949 Test: blockdev nvme admin passthru ...passed 00:16:17.949 Test: blockdev copy ...passed 00:16:17.949 00:16:17.949 Run Summary: Type Total Ran Passed Failed Inactive 00:16:17.949 suites 1 1 n/a 0 0 00:16:17.949 tests 23 23 23 0 0 00:16:17.949 asserts 152 152 152 0 n/a 00:16:17.949 00:16:17.949 Elapsed time = 1.178 seconds 00:16:18.516 06:11:24 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:18.516 06:11:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.516 06:11:24 -- common/autotest_common.sh@10 -- # set +x 00:16:18.516 06:11:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:18.516 06:11:24 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:18.516 06:11:24 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:18.516 06:11:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:18.516 06:11:24 -- nvmf/common.sh@116 -- # sync 00:16:18.516 06:11:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:18.516 06:11:24 -- nvmf/common.sh@119 -- # set +e 00:16:18.516 06:11:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:18.516 06:11:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:18.516 rmmod nvme_tcp 00:16:18.516 rmmod nvme_fabrics 00:16:18.516 rmmod nvme_keyring 00:16:18.516 06:11:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:18.516 06:11:24 -- nvmf/common.sh@123 -- # set -e 00:16:18.516 06:11:24 -- nvmf/common.sh@124 -- # return 0 00:16:18.516 06:11:24 -- nvmf/common.sh@477 -- # '[' -n 1126951 ']' 00:16:18.516 06:11:24 -- nvmf/common.sh@478 -- # killprocess 1126951 00:16:18.516 06:11:24 -- common/autotest_common.sh@926 -- # '[' -z 1126951 ']' 00:16:18.516 06:11:24 -- common/autotest_common.sh@930 -- # kill -0 1126951 00:16:18.516 06:11:24 -- common/autotest_common.sh@931 -- # uname 00:16:18.516 06:11:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:18.516 06:11:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1126951 00:16:18.516 06:11:24 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:16:18.516 06:11:24 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:16:18.516 06:11:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1126951' 00:16:18.516 killing process with pid 1126951 00:16:18.516 06:11:24 -- common/autotest_common.sh@945 -- # kill 1126951 00:16:18.516 06:11:24 -- common/autotest_common.sh@950 -- # wait 1126951 00:16:19.084 06:11:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:19.084 06:11:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:19.084 06:11:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:19.084 06:11:25 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:19.084 06:11:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:19.084 06:11:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.084 06:11:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.084 06:11:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.989 06:11:27 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:20.989 00:16:20.989 real 0m7.142s 00:16:20.989 user 0m13.445s 00:16:20.989 sys 0m2.479s 00:16:20.989 06:11:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:20.989 06:11:27 -- common/autotest_common.sh@10 -- # set +x 00:16:20.989 ************************************ 00:16:20.989 END TEST nvmf_bdevio_no_huge 00:16:20.989 ************************************ 00:16:20.989 06:11:27 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:20.989 06:11:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:20.989 06:11:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:20.989 06:11:27 -- common/autotest_common.sh@10 -- # set +x 00:16:20.989 ************************************ 00:16:20.989 START TEST nvmf_tls 00:16:20.989 ************************************ 00:16:20.989 06:11:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:20.989 * Looking for test storage... 
00:16:20.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:20.989 06:11:27 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:20.989 06:11:27 -- nvmf/common.sh@7 -- # uname -s 00:16:20.989 06:11:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:20.989 06:11:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:20.989 06:11:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:20.989 06:11:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:20.989 06:11:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:20.989 06:11:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:20.989 06:11:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:20.989 06:11:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:20.989 06:11:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:20.989 06:11:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:20.989 06:11:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:20.989 06:11:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:20.989 06:11:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:20.989 06:11:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:20.989 06:11:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:20.989 06:11:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:20.989 06:11:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:20.989 06:11:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:20.989 06:11:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:20.989 06:11:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.989 06:11:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.989 06:11:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.989 06:11:27 -- paths/export.sh@5 -- # export PATH 00:16:20.989 06:11:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.989 06:11:27 -- nvmf/common.sh@46 -- # : 0 00:16:20.989 06:11:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:20.989 06:11:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:20.989 06:11:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:20.989 06:11:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:20.989 06:11:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:20.989 06:11:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:20.989 06:11:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:20.989 06:11:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:20.989 06:11:27 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:20.989 06:11:27 -- target/tls.sh@71 -- # nvmftestinit 00:16:20.989 06:11:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:20.989 06:11:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:20.989 06:11:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:20.989 06:11:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:20.990 06:11:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:20.990 06:11:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.990 06:11:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:20.990 06:11:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.990 06:11:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:20.990 06:11:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:20.990 06:11:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:20.990 06:11:27 -- common/autotest_common.sh@10 -- # set +x 00:16:23.521 06:11:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:23.521 06:11:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:23.521 06:11:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:23.521 06:11:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:23.521 06:11:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:23.521 06:11:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:23.521 06:11:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:23.521 06:11:29 -- nvmf/common.sh@294 -- # net_devs=() 00:16:23.521 06:11:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:23.521 06:11:29 -- nvmf/common.sh@295 -- # e810=() 00:16:23.521 
06:11:29 -- nvmf/common.sh@295 -- # local -ga e810 00:16:23.521 06:11:29 -- nvmf/common.sh@296 -- # x722=() 00:16:23.521 06:11:29 -- nvmf/common.sh@296 -- # local -ga x722 00:16:23.521 06:11:29 -- nvmf/common.sh@297 -- # mlx=() 00:16:23.521 06:11:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:23.521 06:11:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:23.521 06:11:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:23.521 06:11:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:23.521 06:11:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:23.521 06:11:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:23.521 06:11:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:23.521 06:11:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:23.521 06:11:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:23.521 06:11:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:23.521 06:11:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:23.521 06:11:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:23.521 06:11:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:23.521 06:11:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:23.521 06:11:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:23.521 06:11:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:23.521 06:11:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:23.521 06:11:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:23.521 06:11:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:23.521 06:11:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:23.521 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:23.521 06:11:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:23.521 06:11:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:23.521 06:11:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:23.521 06:11:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:23.521 06:11:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:23.521 06:11:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:23.521 06:11:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:23.521 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:23.521 06:11:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:23.521 06:11:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:23.521 06:11:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:23.521 06:11:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:23.521 06:11:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:23.521 06:11:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:23.521 06:11:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:23.521 06:11:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:23.521 06:11:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:23.521 06:11:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:23.521 06:11:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:23.521 06:11:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:23.521 06:11:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:23.521 Found net devices under 
0000:0a:00.0: cvl_0_0 00:16:23.521 06:11:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:23.521 06:11:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:23.521 06:11:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:23.521 06:11:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:23.521 06:11:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:23.521 06:11:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:23.521 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:23.521 06:11:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:23.521 06:11:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:23.521 06:11:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:23.521 06:11:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:23.521 06:11:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:23.521 06:11:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:23.521 06:11:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:23.521 06:11:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:23.521 06:11:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:23.521 06:11:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:23.521 06:11:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:23.521 06:11:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:23.521 06:11:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:23.521 06:11:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:23.521 06:11:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:23.521 06:11:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:23.521 06:11:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:23.521 06:11:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:23.521 06:11:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:23.521 06:11:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:23.521 06:11:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:23.521 06:11:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:23.521 06:11:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:23.521 06:11:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:23.521 06:11:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:23.521 06:11:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:23.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:23.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:16:23.521 00:16:23.521 --- 10.0.0.2 ping statistics --- 00:16:23.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.521 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:16:23.521 06:11:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:23.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:23.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:16:23.521 00:16:23.521 --- 10.0.0.1 ping statistics --- 00:16:23.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.521 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:16:23.521 06:11:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:23.521 06:11:29 -- nvmf/common.sh@410 -- # return 0 00:16:23.521 06:11:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:23.521 06:11:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:23.521 06:11:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:23.521 06:11:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:23.521 06:11:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:23.521 06:11:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:23.521 06:11:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:23.521 06:11:29 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:23.521 06:11:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:23.521 06:11:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:23.521 06:11:29 -- common/autotest_common.sh@10 -- # set +x 00:16:23.521 06:11:29 -- nvmf/common.sh@469 -- # nvmfpid=1129226 00:16:23.521 06:11:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:23.521 06:11:29 -- nvmf/common.sh@470 -- # waitforlisten 1129226 00:16:23.521 06:11:29 -- common/autotest_common.sh@819 -- # '[' -z 1129226 ']' 00:16:23.521 06:11:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.521 06:11:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:23.521 06:11:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.521 06:11:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:23.521 06:11:29 -- common/autotest_common.sh@10 -- # set +x 00:16:23.521 [2024-07-13 06:11:29.641579] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:23.522 [2024-07-13 06:11:29.641655] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.522 EAL: No free 2048 kB hugepages reported on node 1 00:16:23.522 [2024-07-13 06:11:29.718576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.522 [2024-07-13 06:11:29.839051] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:23.522 [2024-07-13 06:11:29.839208] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:23.522 [2024-07-13 06:11:29.839225] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:23.522 [2024-07-13 06:11:29.839245] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
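Note on the nvmf_tcp_init sequence traced above: it lets the initiator and target talk over two real E810 ports on one host by moving one port into a private network namespace (the target side) while the other stays in the root namespace (the initiator side). A condensed sketch of the same steps, with the interface names, namespace name, 10.0.0.0/24 addresses and binary paths taken from this particular run (they will differ on other hosts; paths shortened):

    ip netns add cvl_0_0_ns_spdk                         # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # the target application is then launched inside the namespace, as above:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc

The two pings (10.0.0.2 from the root namespace, 10.0.0.1 from inside the namespace) simply verify the cross-namespace path before the target starts.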
00:16:23.522 [2024-07-13 06:11:29.839282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.522 06:11:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:23.522 06:11:29 -- common/autotest_common.sh@852 -- # return 0 00:16:23.522 06:11:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:23.522 06:11:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:23.522 06:11:29 -- common/autotest_common.sh@10 -- # set +x 00:16:23.522 06:11:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:23.522 06:11:29 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:16:23.522 06:11:29 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:23.780 true 00:16:23.780 06:11:30 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:23.780 06:11:30 -- target/tls.sh@82 -- # jq -r .tls_version 00:16:24.037 06:11:30 -- target/tls.sh@82 -- # version=0 00:16:24.038 06:11:30 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:16:24.038 06:11:30 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:24.296 06:11:30 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:24.296 06:11:30 -- target/tls.sh@90 -- # jq -r .tls_version 00:16:24.554 06:11:30 -- target/tls.sh@90 -- # version=13 00:16:24.554 06:11:30 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:16:24.554 06:11:30 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:24.812 06:11:31 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:24.812 06:11:31 -- target/tls.sh@98 -- # jq -r .tls_version 00:16:25.070 06:11:31 -- target/tls.sh@98 -- # version=7 00:16:25.070 06:11:31 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:16:25.070 06:11:31 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:25.070 06:11:31 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:25.328 06:11:31 -- target/tls.sh@105 -- # ktls=false 00:16:25.328 06:11:31 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:16:25.328 06:11:31 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:25.586 06:11:31 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:25.586 06:11:31 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:16:25.844 06:11:32 -- target/tls.sh@113 -- # ktls=true 00:16:25.844 06:11:32 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:16:25.844 06:11:32 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:26.102 06:11:32 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:26.102 06:11:32 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:16:26.359 06:11:32 -- target/tls.sh@121 -- # ktls=false 00:16:26.359 06:11:32 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:16:26.359 06:11:32 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 
00:16:26.359 06:11:32 -- target/tls.sh@49 -- # local key hash crc 00:16:26.359 06:11:32 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:16:26.359 06:11:32 -- target/tls.sh@51 -- # hash=01 00:16:26.359 06:11:32 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:16:26.359 06:11:32 -- target/tls.sh@52 -- # gzip -1 -c 00:16:26.359 06:11:32 -- target/tls.sh@52 -- # tail -c8 00:16:26.359 06:11:32 -- target/tls.sh@52 -- # head -c 4 00:16:26.359 06:11:32 -- target/tls.sh@52 -- # crc='p$H�' 00:16:26.359 06:11:32 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:26.359 06:11:32 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:16:26.359 06:11:32 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:26.359 06:11:32 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:26.359 06:11:32 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:16:26.359 06:11:32 -- target/tls.sh@49 -- # local key hash crc 00:16:26.359 06:11:32 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:16:26.359 06:11:32 -- target/tls.sh@51 -- # hash=01 00:16:26.359 06:11:32 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:16:26.359 06:11:32 -- target/tls.sh@52 -- # gzip -1 -c 00:16:26.359 06:11:32 -- target/tls.sh@52 -- # tail -c8 00:16:26.359 06:11:32 -- target/tls.sh@52 -- # head -c 4 00:16:26.359 06:11:32 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:16:26.359 06:11:32 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:26.359 06:11:32 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:16:26.359 06:11:32 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:26.359 06:11:32 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:26.359 06:11:32 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:26.359 06:11:32 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:16:26.359 06:11:32 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:26.359 06:11:32 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:26.359 06:11:32 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:26.359 06:11:32 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:16:26.360 06:11:32 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:26.616 06:11:32 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:16:26.873 06:11:33 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:26.873 06:11:33 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:26.873 06:11:33 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:27.131 [2024-07-13 06:11:33.530756] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
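The format_interchange_psk trace above is easier to follow when pulled out of the xtrace noise: the helper takes a hex key string and a hash identifier, computes the key's CRC32 by compressing it with gzip and reading the 4-byte CRC field from the trailer, then base64-encodes key-plus-CRC under an NVMeTLSkey-1:<hash>: prefix. A standalone sketch of the same derivation follows; the function name is illustrative, and like the traced helper it keeps the CRC in a shell variable, which works for these sample keys but is not byte-safe in general:

    format_psk() {
        local key=$1 hash=$2 crc
        # gzip's member trailer is CRC32 (4 bytes) followed by ISIZE (4 bytes),
        # so tail -c8 | head -c4 yields the little-endian CRC32 of the key string
        crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)
        echo "NVMeTLSkey-1:$hash:$(echo -n "$key$crc" | base64):"
    }
    format_psk 00112233445566778899aabbccddeeff 01
    # -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

The resulting strings are written to key1.txt and key2.txt, chmod'ed to 0600, and later handed to the target (nvmf_subsystem_add_host --psk) and the initiator (bdev_nvme_attach_controller --psk), which is where the TLS handshakes in the rest of this log get their pre-shared keys.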
00:16:27.131 06:11:33 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:27.389 06:11:33 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:27.646 [2024-07-13 06:11:33.991989] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:27.646 [2024-07-13 06:11:33.992246] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.646 06:11:34 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:27.904 malloc0 00:16:27.904 06:11:34 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:28.161 06:11:34 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:28.418 06:11:34 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:28.418 EAL: No free 2048 kB hugepages reported on node 1 00:16:38.405 Initializing NVMe Controllers 00:16:38.405 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:38.405 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:38.405 Initialization complete. Launching workers. 
00:16:38.405 ======================================================== 00:16:38.405 Latency(us) 00:16:38.405 Device Information : IOPS MiB/s Average min max 00:16:38.405 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7666.18 29.95 8351.02 1223.92 10976.96 00:16:38.405 ======================================================== 00:16:38.405 Total : 7666.18 29.95 8351.02 1223.92 10976.96 00:16:38.405 00:16:38.405 06:11:44 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:38.405 06:11:44 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:38.405 06:11:44 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:38.405 06:11:44 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:38.405 06:11:44 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:16:38.405 06:11:44 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:38.405 06:11:44 -- target/tls.sh@28 -- # bdevperf_pid=1131063 00:16:38.405 06:11:44 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:38.405 06:11:44 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:38.405 06:11:44 -- target/tls.sh@31 -- # waitforlisten 1131063 /var/tmp/bdevperf.sock 00:16:38.405 06:11:44 -- common/autotest_common.sh@819 -- # '[' -z 1131063 ']' 00:16:38.405 06:11:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:38.405 06:11:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:38.405 06:11:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:38.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:38.405 06:11:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:38.405 06:11:44 -- common/autotest_common.sh@10 -- # set +x 00:16:38.405 [2024-07-13 06:11:44.849194] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:16:38.405 [2024-07-13 06:11:44.849285] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1131063 ] 00:16:38.405 EAL: No free 2048 kB hugepages reported on node 1 00:16:38.405 [2024-07-13 06:11:44.906390] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:38.664 [2024-07-13 06:11:45.019655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:39.597 06:11:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:39.597 06:11:45 -- common/autotest_common.sh@852 -- # return 0 00:16:39.597 06:11:45 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:39.597 [2024-07-13 06:11:45.998158] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:39.597 TLSTESTn1 00:16:39.597 06:11:46 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:39.855 Running I/O for 10 seconds... 00:16:49.819 00:16:49.819 Latency(us) 00:16:49.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.819 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:49.819 Verification LBA range: start 0x0 length 0x2000 00:16:49.819 TLSTESTn1 : 10.02 3093.80 12.09 0.00 0.00 41318.26 7378.87 47962.64 00:16:49.819 =================================================================================================================== 00:16:49.819 Total : 3093.80 12.09 0.00 0.00 41318.26 7378.87 47962.64 00:16:49.819 0 00:16:49.819 06:11:56 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:49.819 06:11:56 -- target/tls.sh@45 -- # killprocess 1131063 00:16:49.819 06:11:56 -- common/autotest_common.sh@926 -- # '[' -z 1131063 ']' 00:16:49.819 06:11:56 -- common/autotest_common.sh@930 -- # kill -0 1131063 00:16:49.819 06:11:56 -- common/autotest_common.sh@931 -- # uname 00:16:49.819 06:11:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:49.819 06:11:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1131063 00:16:49.819 06:11:56 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:49.819 06:11:56 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:49.819 06:11:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1131063' 00:16:49.819 killing process with pid 1131063 00:16:49.819 06:11:56 -- common/autotest_common.sh@945 -- # kill 1131063 00:16:49.819 Received shutdown signal, test time was about 10.000000 seconds 00:16:49.819 00:16:49.819 Latency(us) 00:16:49.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.819 =================================================================================================================== 00:16:49.819 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:49.819 06:11:56 -- common/autotest_common.sh@950 -- # wait 1131063 00:16:50.077 06:11:56 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:16:50.077 06:11:56 -- common/autotest_common.sh@640 -- # local es=0 00:16:50.077 06:11:56 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:16:50.077 06:11:56 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:16:50.077 06:11:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:50.077 06:11:56 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:16:50.077 06:11:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:50.077 06:11:56 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:16:50.077 06:11:56 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:50.077 06:11:56 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:50.077 06:11:56 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:50.077 06:11:56 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:16:50.077 06:11:56 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:50.077 06:11:56 -- target/tls.sh@28 -- # bdevperf_pid=1132555 00:16:50.077 06:11:56 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:50.077 06:11:56 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:50.077 06:11:56 -- target/tls.sh@31 -- # waitforlisten 1132555 /var/tmp/bdevperf.sock 00:16:50.077 06:11:56 -- common/autotest_common.sh@819 -- # '[' -z 1132555 ']' 00:16:50.077 06:11:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:50.077 06:11:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:50.077 06:11:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:50.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:50.077 06:11:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:50.077 06:11:56 -- common/autotest_common.sh@10 -- # set +x 00:16:50.077 [2024-07-13 06:11:56.572042] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:16:50.077 [2024-07-13 06:11:56.572125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1132555 ] 00:16:50.335 EAL: No free 2048 kB hugepages reported on node 1 00:16:50.335 [2024-07-13 06:11:56.631553] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.335 [2024-07-13 06:11:56.735289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:51.267 06:11:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:51.267 06:11:57 -- common/autotest_common.sh@852 -- # return 0 00:16:51.267 06:11:57 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:16:51.525 [2024-07-13 06:11:57.786485] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:51.525 [2024-07-13 06:11:57.794972] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:51.525 [2024-07-13 06:11:57.795466] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1e870 (107): Transport endpoint is not connected 00:16:51.525 [2024-07-13 06:11:57.796456] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf1e870 (9): Bad file descriptor 00:16:51.525 [2024-07-13 06:11:57.797456] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:51.525 [2024-07-13 06:11:57.797474] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:51.525 [2024-07-13 06:11:57.797513] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:51.525 request: 00:16:51.525 { 00:16:51.525 "name": "TLSTEST", 00:16:51.525 "trtype": "tcp", 00:16:51.525 "traddr": "10.0.0.2", 00:16:51.525 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:51.525 "adrfam": "ipv4", 00:16:51.525 "trsvcid": "4420", 00:16:51.525 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.525 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:16:51.525 "method": "bdev_nvme_attach_controller", 00:16:51.525 "req_id": 1 00:16:51.525 } 00:16:51.525 Got JSON-RPC error response 00:16:51.525 response: 00:16:51.525 { 00:16:51.525 "code": -32602, 00:16:51.525 "message": "Invalid parameters" 00:16:51.525 } 00:16:51.525 06:11:57 -- target/tls.sh@36 -- # killprocess 1132555 00:16:51.525 06:11:57 -- common/autotest_common.sh@926 -- # '[' -z 1132555 ']' 00:16:51.525 06:11:57 -- common/autotest_common.sh@930 -- # kill -0 1132555 00:16:51.525 06:11:57 -- common/autotest_common.sh@931 -- # uname 00:16:51.525 06:11:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:51.525 06:11:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1132555 00:16:51.525 06:11:57 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:51.525 06:11:57 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:51.525 06:11:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1132555' 00:16:51.526 killing process with pid 1132555 00:16:51.526 06:11:57 -- common/autotest_common.sh@945 -- # kill 1132555 00:16:51.526 Received shutdown signal, test time was about 10.000000 seconds 00:16:51.526 00:16:51.526 Latency(us) 00:16:51.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.526 =================================================================================================================== 00:16:51.526 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:51.526 06:11:57 -- common/autotest_common.sh@950 -- # wait 1132555 00:16:51.784 06:11:58 -- target/tls.sh@37 -- # return 1 00:16:51.784 06:11:58 -- common/autotest_common.sh@643 -- # es=1 00:16:51.784 06:11:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:51.784 06:11:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:51.784 06:11:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:51.784 06:11:58 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:51.784 06:11:58 -- common/autotest_common.sh@640 -- # local es=0 00:16:51.784 06:11:58 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:51.784 06:11:58 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:16:51.784 06:11:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:51.784 06:11:58 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:16:51.784 06:11:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:51.784 06:11:58 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:51.784 06:11:58 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:51.784 06:11:58 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:51.784 06:11:58 -- target/tls.sh@23 -- 
# hostnqn=nqn.2016-06.io.spdk:host2 00:16:51.784 06:11:58 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:16:51.784 06:11:58 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:51.784 06:11:58 -- target/tls.sh@28 -- # bdevperf_pid=1132706 00:16:51.784 06:11:58 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:51.784 06:11:58 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:51.784 06:11:58 -- target/tls.sh@31 -- # waitforlisten 1132706 /var/tmp/bdevperf.sock 00:16:51.784 06:11:58 -- common/autotest_common.sh@819 -- # '[' -z 1132706 ']' 00:16:51.784 06:11:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:51.784 06:11:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:51.784 06:11:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:51.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:51.784 06:11:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:51.784 06:11:58 -- common/autotest_common.sh@10 -- # set +x 00:16:51.784 [2024-07-13 06:11:58.105981] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:51.784 [2024-07-13 06:11:58.106059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1132706 ] 00:16:51.784 EAL: No free 2048 kB hugepages reported on node 1 00:16:51.784 [2024-07-13 06:11:58.164614] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.784 [2024-07-13 06:11:58.270119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:52.719 06:11:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:52.719 06:11:59 -- common/autotest_common.sh@852 -- # return 0 00:16:52.719 06:11:59 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:52.977 [2024-07-13 06:11:59.270044] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:52.977 [2024-07-13 06:11:59.278420] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:52.977 [2024-07-13 06:11:59.278452] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:16:52.977 [2024-07-13 06:11:59.278506] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:52.977 [2024-07-13 06:11:59.279023] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a2870 (107): Transport endpoint is not connected 00:16:52.977 [2024-07-13 06:11:59.280013] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x12a2870 (9): Bad file descriptor 00:16:52.977 [2024-07-13 06:11:59.281011] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:52.977 [2024-07-13 06:11:59.281030] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:52.977 [2024-07-13 06:11:59.281043] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:52.977 request: 00:16:52.977 { 00:16:52.977 "name": "TLSTEST", 00:16:52.977 "trtype": "tcp", 00:16:52.977 "traddr": "10.0.0.2", 00:16:52.977 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:52.977 "adrfam": "ipv4", 00:16:52.977 "trsvcid": "4420", 00:16:52.977 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:52.977 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:16:52.977 "method": "bdev_nvme_attach_controller", 00:16:52.977 "req_id": 1 00:16:52.977 } 00:16:52.977 Got JSON-RPC error response 00:16:52.977 response: 00:16:52.977 { 00:16:52.977 "code": -32602, 00:16:52.977 "message": "Invalid parameters" 00:16:52.977 } 00:16:52.977 06:11:59 -- target/tls.sh@36 -- # killprocess 1132706 00:16:52.977 06:11:59 -- common/autotest_common.sh@926 -- # '[' -z 1132706 ']' 00:16:52.977 06:11:59 -- common/autotest_common.sh@930 -- # kill -0 1132706 00:16:52.977 06:11:59 -- common/autotest_common.sh@931 -- # uname 00:16:52.977 06:11:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:52.977 06:11:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1132706 00:16:52.977 06:11:59 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:52.977 06:11:59 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:52.977 06:11:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1132706' 00:16:52.977 killing process with pid 1132706 00:16:52.977 06:11:59 -- common/autotest_common.sh@945 -- # kill 1132706 00:16:52.977 Received shutdown signal, test time was about 10.000000 seconds 00:16:52.977 00:16:52.977 Latency(us) 00:16:52.978 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:52.978 =================================================================================================================== 00:16:52.978 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:52.978 06:11:59 -- common/autotest_common.sh@950 -- # wait 1132706 00:16:53.236 06:11:59 -- target/tls.sh@37 -- # return 1 00:16:53.236 06:11:59 -- common/autotest_common.sh@643 -- # es=1 00:16:53.236 06:11:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:53.236 06:11:59 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:53.236 06:11:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:53.236 06:11:59 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:53.236 06:11:59 -- common/autotest_common.sh@640 -- # local es=0 00:16:53.236 06:11:59 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:53.236 06:11:59 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:16:53.236 06:11:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:53.236 06:11:59 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:16:53.236 06:11:59 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:53.236 06:11:59 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:53.236 06:11:59 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:53.236 06:11:59 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:16:53.236 06:11:59 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:53.236 06:11:59 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:16:53.236 06:11:59 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:53.236 06:11:59 -- target/tls.sh@28 -- # bdevperf_pid=1132858 00:16:53.236 06:11:59 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:53.236 06:11:59 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:53.236 06:11:59 -- target/tls.sh@31 -- # waitforlisten 1132858 /var/tmp/bdevperf.sock 00:16:53.236 06:11:59 -- common/autotest_common.sh@819 -- # '[' -z 1132858 ']' 00:16:53.236 06:11:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:53.236 06:11:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:53.236 06:11:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:53.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:53.236 06:11:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:53.236 06:11:59 -- common/autotest_common.sh@10 -- # set +x 00:16:53.236 [2024-07-13 06:11:59.590992] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:16:53.236 [2024-07-13 06:11:59.591071] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1132858 ] 00:16:53.236 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.236 [2024-07-13 06:11:59.653424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.495 [2024-07-13 06:11:59.763519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:54.060 06:12:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:54.060 06:12:00 -- common/autotest_common.sh@852 -- # return 0 00:16:54.060 06:12:00 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:54.319 [2024-07-13 06:12:00.789141] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:54.319 [2024-07-13 06:12:00.794771] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:54.319 [2024-07-13 06:12:00.794806] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:16:54.319 [2024-07-13 06:12:00.794856] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:54.319 [2024-07-13 06:12:00.795339] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1799870 (107): Transport endpoint is not connected 00:16:54.319 [2024-07-13 06:12:00.796318] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1799870 (9): Bad file descriptor 00:16:54.319 [2024-07-13 06:12:00.797317] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:16:54.319 [2024-07-13 06:12:00.797337] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:54.319 [2024-07-13 06:12:00.797365] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:16:54.319 request: 00:16:54.319 { 00:16:54.319 "name": "TLSTEST", 00:16:54.319 "trtype": "tcp", 00:16:54.319 "traddr": "10.0.0.2", 00:16:54.319 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:54.319 "adrfam": "ipv4", 00:16:54.319 "trsvcid": "4420", 00:16:54.319 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:54.319 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:16:54.319 "method": "bdev_nvme_attach_controller", 00:16:54.319 "req_id": 1 00:16:54.319 } 00:16:54.319 Got JSON-RPC error response 00:16:54.319 response: 00:16:54.319 { 00:16:54.319 "code": -32602, 00:16:54.319 "message": "Invalid parameters" 00:16:54.319 } 00:16:54.319 06:12:00 -- target/tls.sh@36 -- # killprocess 1132858 00:16:54.319 06:12:00 -- common/autotest_common.sh@926 -- # '[' -z 1132858 ']' 00:16:54.319 06:12:00 -- common/autotest_common.sh@930 -- # kill -0 1132858 00:16:54.319 06:12:00 -- common/autotest_common.sh@931 -- # uname 00:16:54.319 06:12:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:54.319 06:12:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1132858 00:16:54.578 06:12:00 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:54.578 06:12:00 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:54.578 06:12:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1132858' 00:16:54.578 killing process with pid 1132858 00:16:54.578 06:12:00 -- common/autotest_common.sh@945 -- # kill 1132858 00:16:54.578 Received shutdown signal, test time was about 10.000000 seconds 00:16:54.578 00:16:54.578 Latency(us) 00:16:54.578 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.578 =================================================================================================================== 00:16:54.578 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:54.578 06:12:00 -- common/autotest_common.sh@950 -- # wait 1132858 00:16:54.578 06:12:01 -- target/tls.sh@37 -- # return 1 00:16:54.578 06:12:01 -- common/autotest_common.sh@643 -- # es=1 00:16:54.578 06:12:01 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:54.578 06:12:01 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:54.578 06:12:01 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:54.578 06:12:01 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:54.578 06:12:01 -- common/autotest_common.sh@640 -- # local es=0 00:16:54.578 06:12:01 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:54.578 06:12:01 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:16:54.578 06:12:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:54.578 06:12:01 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:16:54.578 06:12:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:54.578 06:12:01 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:16:54.578 06:12:01 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:54.578 06:12:01 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:54.578 06:12:01 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:54.578 06:12:01 -- target/tls.sh@23 -- # psk= 00:16:54.578 06:12:01 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:54.578 06:12:01 -- target/tls.sh@28 
-- # bdevperf_pid=1133194 00:16:54.578 06:12:01 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:54.578 06:12:01 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:54.578 06:12:01 -- target/tls.sh@31 -- # waitforlisten 1133194 /var/tmp/bdevperf.sock 00:16:54.578 06:12:01 -- common/autotest_common.sh@819 -- # '[' -z 1133194 ']' 00:16:54.578 06:12:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:54.578 06:12:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:54.578 06:12:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:54.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:54.578 06:12:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:54.578 06:12:01 -- common/autotest_common.sh@10 -- # set +x 00:16:54.837 [2024-07-13 06:12:01.122959] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:54.837 [2024-07-13 06:12:01.123037] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1133194 ] 00:16:54.837 EAL: No free 2048 kB hugepages reported on node 1 00:16:54.837 [2024-07-13 06:12:01.180351] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:54.837 [2024-07-13 06:12:01.295984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:55.770 06:12:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:55.770 06:12:02 -- common/autotest_common.sh@852 -- # return 0 00:16:55.770 06:12:02 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:16:56.039 [2024-07-13 06:12:02.316449] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:56.039 [2024-07-13 06:12:02.317880] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x219f330 (9): Bad file descriptor 00:16:56.039 [2024-07-13 06:12:02.318863] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:56.039 [2024-07-13 06:12:02.318891] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:56.039 [2024-07-13 06:12:02.318906] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:56.039 request: 00:16:56.039 { 00:16:56.039 "name": "TLSTEST", 00:16:56.039 "trtype": "tcp", 00:16:56.039 "traddr": "10.0.0.2", 00:16:56.039 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:56.039 "adrfam": "ipv4", 00:16:56.039 "trsvcid": "4420", 00:16:56.039 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:56.039 "method": "bdev_nvme_attach_controller", 00:16:56.039 "req_id": 1 00:16:56.039 } 00:16:56.039 Got JSON-RPC error response 00:16:56.039 response: 00:16:56.039 { 00:16:56.039 "code": -32602, 00:16:56.039 "message": "Invalid parameters" 00:16:56.039 } 00:16:56.039 06:12:02 -- target/tls.sh@36 -- # killprocess 1133194 00:16:56.039 06:12:02 -- common/autotest_common.sh@926 -- # '[' -z 1133194 ']' 00:16:56.039 06:12:02 -- common/autotest_common.sh@930 -- # kill -0 1133194 00:16:56.039 06:12:02 -- common/autotest_common.sh@931 -- # uname 00:16:56.039 06:12:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:56.039 06:12:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1133194 00:16:56.039 06:12:02 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:56.039 06:12:02 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:56.039 06:12:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1133194' 00:16:56.039 killing process with pid 1133194 00:16:56.039 06:12:02 -- common/autotest_common.sh@945 -- # kill 1133194 00:16:56.039 Received shutdown signal, test time was about 10.000000 seconds 00:16:56.039 00:16:56.039 Latency(us) 00:16:56.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.039 =================================================================================================================== 00:16:56.039 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:56.039 06:12:02 -- common/autotest_common.sh@950 -- # wait 1133194 00:16:56.302 06:12:02 -- target/tls.sh@37 -- # return 1 00:16:56.302 06:12:02 -- common/autotest_common.sh@643 -- # es=1 00:16:56.302 06:12:02 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:56.302 06:12:02 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:56.302 06:12:02 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:56.302 06:12:02 -- target/tls.sh@167 -- # killprocess 1129226 00:16:56.302 06:12:02 -- common/autotest_common.sh@926 -- # '[' -z 1129226 ']' 00:16:56.302 06:12:02 -- common/autotest_common.sh@930 -- # kill -0 1129226 00:16:56.302 06:12:02 -- common/autotest_common.sh@931 -- # uname 00:16:56.302 06:12:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:56.302 06:12:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1129226 00:16:56.302 06:12:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:56.302 06:12:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:56.302 06:12:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1129226' 00:16:56.302 killing process with pid 1129226 00:16:56.302 06:12:02 -- common/autotest_common.sh@945 -- # kill 1129226 00:16:56.302 06:12:02 -- common/autotest_common.sh@950 -- # wait 1129226 00:16:56.560 06:12:02 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:16:56.560 06:12:02 -- target/tls.sh@49 -- # local key hash crc 00:16:56.560 06:12:02 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:16:56.560 06:12:02 -- target/tls.sh@51 -- # hash=02 00:16:56.560 06:12:02 -- target/tls.sh@52 -- # echo 
-n 00112233445566778899aabbccddeeff0011223344556677 00:16:56.560 06:12:02 -- target/tls.sh@52 -- # gzip -1 -c 00:16:56.560 06:12:02 -- target/tls.sh@52 -- # tail -c8 00:16:56.560 06:12:02 -- target/tls.sh@52 -- # head -c 4 00:16:56.560 06:12:02 -- target/tls.sh@52 -- # crc='�e�'\''' 00:16:56.560 06:12:02 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:56.561 06:12:02 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:16:56.561 06:12:02 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:56.561 06:12:02 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:56.561 06:12:02 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:16:56.561 06:12:02 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:16:56.561 06:12:02 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:16:56.561 06:12:02 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:16:56.561 06:12:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:56.561 06:12:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:56.561 06:12:02 -- common/autotest_common.sh@10 -- # set +x 00:16:56.561 06:12:02 -- nvmf/common.sh@469 -- # nvmfpid=1133415 00:16:56.561 06:12:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:56.561 06:12:02 -- nvmf/common.sh@470 -- # waitforlisten 1133415 00:16:56.561 06:12:02 -- common/autotest_common.sh@819 -- # '[' -z 1133415 ']' 00:16:56.561 06:12:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.561 06:12:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:56.561 06:12:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.561 06:12:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:56.561 06:12:02 -- common/autotest_common.sh@10 -- # set +x 00:16:56.561 [2024-07-13 06:12:02.965761] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:56.561 [2024-07-13 06:12:02.965848] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:56.561 EAL: No free 2048 kB hugepages reported on node 1 00:16:56.561 [2024-07-13 06:12:03.031153] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.823 [2024-07-13 06:12:03.138274] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:56.823 [2024-07-13 06:12:03.138444] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:56.823 [2024-07-13 06:12:03.138462] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:56.823 [2024-07-13 06:12:03.138474] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
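Compared with the 01-format keys generated earlier, this block derives a 48-character key with hash identifier 02; in the NVMe TLS PSK interchange format the second field selects the hash used for the retained PSK (01 and 02 are commonly documented as SHA-256 and SHA-384, noted here as background rather than something this log states). The generated key_long.txt is consumed the same way as key1.txt, e.g. via the nvmf_subsystem_add_host call that follows and an attach call of the same shape as the earlier bdevperf runs (paths shortened, sketch only):

    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk test/nvmf/target/key_long.txt
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk test/nvmf/target/key_long.txt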
00:16:56.823 [2024-07-13 06:12:03.138511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.773 06:12:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:57.773 06:12:03 -- common/autotest_common.sh@852 -- # return 0 00:16:57.773 06:12:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:57.773 06:12:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:57.773 06:12:03 -- common/autotest_common.sh@10 -- # set +x 00:16:57.773 06:12:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:57.773 06:12:03 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:16:57.773 06:12:03 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:16:57.773 06:12:03 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:57.773 [2024-07-13 06:12:04.162326] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:57.773 06:12:04 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:58.030 06:12:04 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:58.287 [2024-07-13 06:12:04.639615] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:58.287 [2024-07-13 06:12:04.639843] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:58.287 06:12:04 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:58.544 malloc0 00:16:58.544 06:12:04 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:58.801 06:12:05 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:16:59.059 06:12:05 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:16:59.059 06:12:05 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:59.059 06:12:05 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:59.059 06:12:05 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:59.059 06:12:05 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:16:59.059 06:12:05 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:59.059 06:12:05 -- target/tls.sh@28 -- # bdevperf_pid=1133827 00:16:59.059 06:12:05 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:59.059 06:12:05 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:59.059 06:12:05 -- target/tls.sh@31 -- # waitforlisten 1133827 /var/tmp/bdevperf.sock 00:16:59.059 06:12:05 -- common/autotest_common.sh@819 -- # '[' -z 1133827 
']' 00:16:59.059 06:12:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:59.059 06:12:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:59.059 06:12:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:59.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:59.060 06:12:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:59.060 06:12:05 -- common/autotest_common.sh@10 -- # set +x 00:16:59.060 [2024-07-13 06:12:05.423599] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:59.060 [2024-07-13 06:12:05.423690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1133827 ] 00:16:59.060 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.060 [2024-07-13 06:12:05.482714] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.317 [2024-07-13 06:12:05.590798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:00.250 06:12:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:00.250 06:12:06 -- common/autotest_common.sh@852 -- # return 0 00:17:00.250 06:12:06 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:00.250 [2024-07-13 06:12:06.669972] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:00.250 TLSTESTn1 00:17:00.250 06:12:06 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:00.508 Running I/O for 10 seconds... 
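For reference, the initiator side of the run above reduces to two calls against the bdevperf RPC socket; SPDK below is shorthand for the workspace checkout used in this job:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # The TLS handshake happens during the attach; the PSK file must not be group/world-accessible
  # (the 0666 case later in this log is rejected).
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk $SPDK/test/nvmf/target/key_long.txt

  # Kick off the verify workload bdevperf was started with (-q 128 -o 4096 -w verify -t 10).
  $SPDK/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests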
00:17:10.472 00:17:10.472 Latency(us) 00:17:10.472 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.472 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:10.472 Verification LBA range: start 0x0 length 0x2000 00:17:10.472 TLSTESTn1 : 10.02 3334.64 13.03 0.00 0.00 38339.77 4781.70 44273.21 00:17:10.472 =================================================================================================================== 00:17:10.472 Total : 3334.64 13.03 0.00 0.00 38339.77 4781.70 44273.21 00:17:10.472 0 00:17:10.472 06:12:16 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:10.472 06:12:16 -- target/tls.sh@45 -- # killprocess 1133827 00:17:10.472 06:12:16 -- common/autotest_common.sh@926 -- # '[' -z 1133827 ']' 00:17:10.472 06:12:16 -- common/autotest_common.sh@930 -- # kill -0 1133827 00:17:10.472 06:12:16 -- common/autotest_common.sh@931 -- # uname 00:17:10.472 06:12:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:10.472 06:12:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1133827 00:17:10.472 06:12:16 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:10.472 06:12:16 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:10.472 06:12:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1133827' 00:17:10.472 killing process with pid 1133827 00:17:10.472 06:12:16 -- common/autotest_common.sh@945 -- # kill 1133827 00:17:10.472 Received shutdown signal, test time was about 10.000000 seconds 00:17:10.472 00:17:10.472 Latency(us) 00:17:10.472 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:10.472 =================================================================================================================== 00:17:10.472 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:10.472 06:12:16 -- common/autotest_common.sh@950 -- # wait 1133827 00:17:10.729 06:12:17 -- target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:10.729 06:12:17 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:10.729 06:12:17 -- common/autotest_common.sh@640 -- # local es=0 00:17:10.729 06:12:17 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:10.729 06:12:17 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:10.729 06:12:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:10.729 06:12:17 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:10.729 06:12:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:10.729 06:12:17 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:10.729 06:12:17 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:10.729 06:12:17 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:10.729 06:12:17 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:10.729 06:12:17 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:17:10.729 06:12:17 -- 
target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:10.729 06:12:17 -- target/tls.sh@28 -- # bdevperf_pid=1135706 00:17:10.729 06:12:17 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:10.729 06:12:17 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:10.729 06:12:17 -- target/tls.sh@31 -- # waitforlisten 1135706 /var/tmp/bdevperf.sock 00:17:10.729 06:12:17 -- common/autotest_common.sh@819 -- # '[' -z 1135706 ']' 00:17:10.729 06:12:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:10.729 06:12:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:10.729 06:12:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:10.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:10.729 06:12:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:10.729 06:12:17 -- common/autotest_common.sh@10 -- # set +x 00:17:10.986 [2024-07-13 06:12:17.270813] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:10.986 [2024-07-13 06:12:17.270897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1135706 ] 00:17:10.986 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.986 [2024-07-13 06:12:17.327694] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.986 [2024-07-13 06:12:17.428000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:11.941 06:12:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:11.941 06:12:18 -- common/autotest_common.sh@852 -- # return 0 00:17:11.941 06:12:18 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:11.941 [2024-07-13 06:12:18.395808] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:11.941 [2024-07-13 06:12:18.395887] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:11.941 request: 00:17:11.941 { 00:17:11.941 "name": "TLSTEST", 00:17:11.941 "trtype": "tcp", 00:17:11.941 "traddr": "10.0.0.2", 00:17:11.941 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:11.941 "adrfam": "ipv4", 00:17:11.941 "trsvcid": "4420", 00:17:11.941 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.941 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:17:11.941 "method": "bdev_nvme_attach_controller", 00:17:11.941 "req_id": 1 00:17:11.941 } 00:17:11.941 Got JSON-RPC error response 00:17:11.941 response: 00:17:11.941 { 00:17:11.941 "code": -22, 00:17:11.941 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:17:11.941 } 00:17:11.941 06:12:18 -- target/tls.sh@36 -- # killprocess 1135706 00:17:11.941 06:12:18 -- common/autotest_common.sh@926 -- # '[' -z 1135706 ']' 00:17:11.941 06:12:18 -- 
common/autotest_common.sh@930 -- # kill -0 1135706 00:17:11.941 06:12:18 -- common/autotest_common.sh@931 -- # uname 00:17:11.941 06:12:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:11.941 06:12:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1135706 00:17:11.941 06:12:18 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:11.941 06:12:18 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:11.941 06:12:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1135706' 00:17:11.941 killing process with pid 1135706 00:17:11.941 06:12:18 -- common/autotest_common.sh@945 -- # kill 1135706 00:17:11.941 Received shutdown signal, test time was about 10.000000 seconds 00:17:11.941 00:17:11.941 Latency(us) 00:17:11.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.941 =================================================================================================================== 00:17:11.941 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:11.941 06:12:18 -- common/autotest_common.sh@950 -- # wait 1135706 00:17:12.199 06:12:18 -- target/tls.sh@37 -- # return 1 00:17:12.199 06:12:18 -- common/autotest_common.sh@643 -- # es=1 00:17:12.199 06:12:18 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:12.199 06:12:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:12.199 06:12:18 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:12.199 06:12:18 -- target/tls.sh@183 -- # killprocess 1133415 00:17:12.199 06:12:18 -- common/autotest_common.sh@926 -- # '[' -z 1133415 ']' 00:17:12.199 06:12:18 -- common/autotest_common.sh@930 -- # kill -0 1133415 00:17:12.199 06:12:18 -- common/autotest_common.sh@931 -- # uname 00:17:12.199 06:12:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:12.199 06:12:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1133415 00:17:12.456 06:12:18 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:12.456 06:12:18 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:12.456 06:12:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1133415' 00:17:12.456 killing process with pid 1133415 00:17:12.456 06:12:18 -- common/autotest_common.sh@945 -- # kill 1133415 00:17:12.456 06:12:18 -- common/autotest_common.sh@950 -- # wait 1133415 00:17:12.717 06:12:19 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:12.717 06:12:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:12.717 06:12:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:12.717 06:12:19 -- common/autotest_common.sh@10 -- # set +x 00:17:12.717 06:12:19 -- nvmf/common.sh@469 -- # nvmfpid=1135989 00:17:12.717 06:12:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:12.717 06:12:19 -- nvmf/common.sh@470 -- # waitforlisten 1135989 00:17:12.717 06:12:19 -- common/autotest_common.sh@819 -- # '[' -z 1135989 ']' 00:17:12.717 06:12:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.717 06:12:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:12.717 06:12:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
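The -22 "Could not retrieve PSK from file" response above is the expected outcome of the first permissions check (target/tls.sh@179-180): the key is deliberately made world-readable and the attach has to fail. A condensed sketch of that check (the real script wraps the whole run_bdevperf helper in NOT; SPDK and KEY are shorthand for the workspace paths):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  KEY=$SPDK/test/nvmf/target/key_long.txt

  chmod 0666 "$KEY"    # loosen permissions on purpose
  if $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
         -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"; then
      echo "attach unexpectedly succeeded with a world-readable PSK" >&2
      exit 1
  fi

The same world-readable key is then exercised against nvmf_subsystem_add_host on a fresh target before the permissions are restored to 0600.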
00:17:12.717 06:12:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:12.717 06:12:19 -- common/autotest_common.sh@10 -- # set +x 00:17:12.717 [2024-07-13 06:12:19.055246] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:12.717 [2024-07-13 06:12:19.055330] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.717 EAL: No free 2048 kB hugepages reported on node 1 00:17:12.717 [2024-07-13 06:12:19.122720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.974 [2024-07-13 06:12:19.234339] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:12.974 [2024-07-13 06:12:19.234494] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:12.974 [2024-07-13 06:12:19.234514] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:12.974 [2024-07-13 06:12:19.234528] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:12.974 [2024-07-13 06:12:19.234566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.539 06:12:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:13.539 06:12:19 -- common/autotest_common.sh@852 -- # return 0 00:17:13.539 06:12:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:13.539 06:12:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:13.539 06:12:19 -- common/autotest_common.sh@10 -- # set +x 00:17:13.539 06:12:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:13.539 06:12:19 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:13.539 06:12:19 -- common/autotest_common.sh@640 -- # local es=0 00:17:13.539 06:12:19 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:13.539 06:12:19 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:17:13.539 06:12:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:13.539 06:12:19 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:17:13.539 06:12:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:13.539 06:12:19 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:13.539 06:12:19 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:13.539 06:12:19 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:13.797 [2024-07-13 06:12:20.226593] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:13.797 06:12:20 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:14.054 06:12:20 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:14.312 [2024-07-13 06:12:20.699805] tcp.c: 
912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:14.312 [2024-07-13 06:12:20.700062] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:14.312 06:12:20 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:14.570 malloc0 00:17:14.570 06:12:20 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:14.829 06:12:21 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:15.087 [2024-07-13 06:12:21.405523] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:15.087 [2024-07-13 06:12:21.405584] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:15.087 [2024-07-13 06:12:21.405609] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:17:15.087 request: 00:17:15.087 { 00:17:15.087 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:15.087 "host": "nqn.2016-06.io.spdk:host1", 00:17:15.087 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:17:15.087 "method": "nvmf_subsystem_add_host", 00:17:15.087 "req_id": 1 00:17:15.087 } 00:17:15.087 Got JSON-RPC error response 00:17:15.087 response: 00:17:15.087 { 00:17:15.087 "code": -32603, 00:17:15.087 "message": "Internal error" 00:17:15.087 } 00:17:15.087 06:12:21 -- common/autotest_common.sh@643 -- # es=1 00:17:15.087 06:12:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:15.087 06:12:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:15.087 06:12:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:15.087 06:12:21 -- target/tls.sh@189 -- # killprocess 1135989 00:17:15.087 06:12:21 -- common/autotest_common.sh@926 -- # '[' -z 1135989 ']' 00:17:15.087 06:12:21 -- common/autotest_common.sh@930 -- # kill -0 1135989 00:17:15.087 06:12:21 -- common/autotest_common.sh@931 -- # uname 00:17:15.087 06:12:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:15.087 06:12:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1135989 00:17:15.087 06:12:21 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:15.087 06:12:21 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:15.087 06:12:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1135989' 00:17:15.087 killing process with pid 1135989 00:17:15.087 06:12:21 -- common/autotest_common.sh@945 -- # kill 1135989 00:17:15.087 06:12:21 -- common/autotest_common.sh@950 -- # wait 1135989 00:17:15.355 06:12:21 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:15.355 06:12:21 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:17:15.355 06:12:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:15.355 06:12:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:15.355 06:12:21 -- common/autotest_common.sh@10 -- # set +x 00:17:15.355 06:12:21 -- nvmf/common.sh@469 -- # nvmfpid=1136303 00:17:15.355 06:12:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
-m 0x2 00:17:15.355 06:12:21 -- nvmf/common.sh@470 -- # waitforlisten 1136303 00:17:15.355 06:12:21 -- common/autotest_common.sh@819 -- # '[' -z 1136303 ']' 00:17:15.355 06:12:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.355 06:12:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:15.355 06:12:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.355 06:12:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:15.355 06:12:21 -- common/autotest_common.sh@10 -- # set +x 00:17:15.355 [2024-07-13 06:12:21.803697] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:15.355 [2024-07-13 06:12:21.803789] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:15.355 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.660 [2024-07-13 06:12:21.871241] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.660 [2024-07-13 06:12:21.983916] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:15.660 [2024-07-13 06:12:21.984083] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:15.660 [2024-07-13 06:12:21.984102] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:15.660 [2024-07-13 06:12:21.984116] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
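With the key back at 0600, the target that just started is configured by setup_nvmf_tgt (target/tls.sh@194); the helper boils down to the six RPCs that appear next in the log, collected here in one place (SPDK, RPC and KEY are shorthand for the workspace paths):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC=$SPDK/scripts/rpc.py
  KEY=$SPDK/test/nvmf/target/key_long.txt

  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"   # PSK file permissions are validated here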
00:17:15.660 [2024-07-13 06:12:21.984159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.591 06:12:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:16.591 06:12:22 -- common/autotest_common.sh@852 -- # return 0 00:17:16.591 06:12:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:16.591 06:12:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:16.591 06:12:22 -- common/autotest_common.sh@10 -- # set +x 00:17:16.591 06:12:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.591 06:12:22 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:16.591 06:12:22 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:16.591 06:12:22 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:16.591 [2024-07-13 06:12:22.989384] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:16.591 06:12:23 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:16.847 06:12:23 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:17.104 [2024-07-13 06:12:23.470741] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:17.104 [2024-07-13 06:12:23.470984] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:17.104 06:12:23 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:17.361 malloc0 00:17:17.361 06:12:23 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:17.618 06:12:23 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:17.877 06:12:24 -- target/tls.sh@197 -- # bdevperf_pid=1136599 00:17:17.877 06:12:24 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:17.877 06:12:24 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:17.877 06:12:24 -- target/tls.sh@200 -- # waitforlisten 1136599 /var/tmp/bdevperf.sock 00:17:17.877 06:12:24 -- common/autotest_common.sh@819 -- # '[' -z 1136599 ']' 00:17:17.877 06:12:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:17.877 06:12:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:17.877 06:12:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:17.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
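bdevperf is started with -z, so it sits idle until it is driven over /var/tmp/bdevperf.sock, and waitforlisten blocks until that socket answers. A rough sketch of the pattern, not the actual helper (which lives in test/common/autotest_common.sh); rpc_get_methods is only used as a ping here:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  bdevperf_pid=$!

  # Poll the RPC socket until the app is ready, bailing out if it died first.
  until $SPDK/scripts/rpc.py -t 1 -s /var/tmp/bdevperf.sock rpc_get_methods &> /dev/null; do
      kill -0 "$bdevperf_pid" 2> /dev/null || { echo "bdevperf exited before listening" >&2; exit 1; }
      sleep 0.5
  done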
00:17:17.877 06:12:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:17.877 06:12:24 -- common/autotest_common.sh@10 -- # set +x 00:17:17.877 [2024-07-13 06:12:24.223834] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:17.877 [2024-07-13 06:12:24.223936] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1136599 ] 00:17:17.877 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.877 [2024-07-13 06:12:24.283032] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.135 [2024-07-13 06:12:24.398257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:19.068 06:12:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:19.068 06:12:25 -- common/autotest_common.sh@852 -- # return 0 00:17:19.068 06:12:25 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:19.068 [2024-07-13 06:12:25.478800] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:19.068 TLSTESTn1 00:17:19.068 06:12:25 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:17:19.635 06:12:25 -- target/tls.sh@205 -- # tgtconf='{ 00:17:19.635 "subsystems": [ 00:17:19.635 { 00:17:19.635 "subsystem": "iobuf", 00:17:19.635 "config": [ 00:17:19.635 { 00:17:19.635 "method": "iobuf_set_options", 00:17:19.635 "params": { 00:17:19.635 "small_pool_count": 8192, 00:17:19.635 "large_pool_count": 1024, 00:17:19.635 "small_bufsize": 8192, 00:17:19.635 "large_bufsize": 135168 00:17:19.635 } 00:17:19.635 } 00:17:19.635 ] 00:17:19.635 }, 00:17:19.635 { 00:17:19.635 "subsystem": "sock", 00:17:19.635 "config": [ 00:17:19.635 { 00:17:19.635 "method": "sock_impl_set_options", 00:17:19.635 "params": { 00:17:19.635 "impl_name": "posix", 00:17:19.635 "recv_buf_size": 2097152, 00:17:19.635 "send_buf_size": 2097152, 00:17:19.635 "enable_recv_pipe": true, 00:17:19.635 "enable_quickack": false, 00:17:19.635 "enable_placement_id": 0, 00:17:19.635 "enable_zerocopy_send_server": true, 00:17:19.635 "enable_zerocopy_send_client": false, 00:17:19.635 "zerocopy_threshold": 0, 00:17:19.635 "tls_version": 0, 00:17:19.635 "enable_ktls": false 00:17:19.635 } 00:17:19.635 }, 00:17:19.635 { 00:17:19.635 "method": "sock_impl_set_options", 00:17:19.635 "params": { 00:17:19.635 "impl_name": "ssl", 00:17:19.635 "recv_buf_size": 4096, 00:17:19.635 "send_buf_size": 4096, 00:17:19.635 "enable_recv_pipe": true, 00:17:19.635 "enable_quickack": false, 00:17:19.635 "enable_placement_id": 0, 00:17:19.635 "enable_zerocopy_send_server": true, 00:17:19.635 "enable_zerocopy_send_client": false, 00:17:19.635 "zerocopy_threshold": 0, 00:17:19.635 "tls_version": 0, 00:17:19.635 "enable_ktls": false 00:17:19.635 } 00:17:19.635 } 00:17:19.635 ] 00:17:19.635 }, 00:17:19.635 { 00:17:19.635 "subsystem": "vmd", 00:17:19.635 "config": [] 00:17:19.635 }, 00:17:19.635 { 00:17:19.635 "subsystem": "accel", 00:17:19.635 "config": [ 00:17:19.635 { 00:17:19.635 "method": "accel_set_options", 00:17:19.635 "params": { 00:17:19.635 "small_cache_size": 128, 
00:17:19.635 "large_cache_size": 16, 00:17:19.635 "task_count": 2048, 00:17:19.635 "sequence_count": 2048, 00:17:19.635 "buf_count": 2048 00:17:19.635 } 00:17:19.635 } 00:17:19.635 ] 00:17:19.635 }, 00:17:19.635 { 00:17:19.635 "subsystem": "bdev", 00:17:19.635 "config": [ 00:17:19.635 { 00:17:19.635 "method": "bdev_set_options", 00:17:19.635 "params": { 00:17:19.635 "bdev_io_pool_size": 65535, 00:17:19.635 "bdev_io_cache_size": 256, 00:17:19.635 "bdev_auto_examine": true, 00:17:19.635 "iobuf_small_cache_size": 128, 00:17:19.635 "iobuf_large_cache_size": 16 00:17:19.635 } 00:17:19.635 }, 00:17:19.635 { 00:17:19.635 "method": "bdev_raid_set_options", 00:17:19.635 "params": { 00:17:19.635 "process_window_size_kb": 1024 00:17:19.635 } 00:17:19.635 }, 00:17:19.635 { 00:17:19.635 "method": "bdev_iscsi_set_options", 00:17:19.635 "params": { 00:17:19.635 "timeout_sec": 30 00:17:19.635 } 00:17:19.635 }, 00:17:19.635 { 00:17:19.635 "method": "bdev_nvme_set_options", 00:17:19.635 "params": { 00:17:19.635 "action_on_timeout": "none", 00:17:19.635 "timeout_us": 0, 00:17:19.635 "timeout_admin_us": 0, 00:17:19.635 "keep_alive_timeout_ms": 10000, 00:17:19.635 "transport_retry_count": 4, 00:17:19.635 "arbitration_burst": 0, 00:17:19.635 "low_priority_weight": 0, 00:17:19.635 "medium_priority_weight": 0, 00:17:19.635 "high_priority_weight": 0, 00:17:19.635 "nvme_adminq_poll_period_us": 10000, 00:17:19.635 "nvme_ioq_poll_period_us": 0, 00:17:19.635 "io_queue_requests": 0, 00:17:19.635 "delay_cmd_submit": true, 00:17:19.635 "bdev_retry_count": 3, 00:17:19.635 "transport_ack_timeout": 0, 00:17:19.635 "ctrlr_loss_timeout_sec": 0, 00:17:19.635 "reconnect_delay_sec": 0, 00:17:19.635 "fast_io_fail_timeout_sec": 0, 00:17:19.635 "generate_uuids": false, 00:17:19.635 "transport_tos": 0, 00:17:19.635 "io_path_stat": false, 00:17:19.635 "allow_accel_sequence": false 00:17:19.635 } 00:17:19.635 }, 00:17:19.635 { 00:17:19.635 "method": "bdev_nvme_set_hotplug", 00:17:19.635 "params": { 00:17:19.635 "period_us": 100000, 00:17:19.635 "enable": false 00:17:19.635 } 00:17:19.635 }, 00:17:19.635 { 00:17:19.635 "method": "bdev_malloc_create", 00:17:19.635 "params": { 00:17:19.635 "name": "malloc0", 00:17:19.635 "num_blocks": 8192, 00:17:19.635 "block_size": 4096, 00:17:19.635 "physical_block_size": 4096, 00:17:19.635 "uuid": "2116a445-3cd4-4520-8916-1a18449af0d9", 00:17:19.635 "optimal_io_boundary": 0 00:17:19.635 } 00:17:19.635 }, 00:17:19.635 { 00:17:19.635 "method": "bdev_wait_for_examine" 00:17:19.635 } 00:17:19.635 ] 00:17:19.635 }, 00:17:19.635 { 00:17:19.635 "subsystem": "nbd", 00:17:19.635 "config": [] 00:17:19.635 }, 00:17:19.635 { 00:17:19.635 "subsystem": "scheduler", 00:17:19.635 "config": [ 00:17:19.635 { 00:17:19.635 "method": "framework_set_scheduler", 00:17:19.635 "params": { 00:17:19.635 "name": "static" 00:17:19.635 } 00:17:19.635 } 00:17:19.635 ] 00:17:19.635 }, 00:17:19.635 { 00:17:19.635 "subsystem": "nvmf", 00:17:19.635 "config": [ 00:17:19.635 { 00:17:19.635 "method": "nvmf_set_config", 00:17:19.635 "params": { 00:17:19.635 "discovery_filter": "match_any", 00:17:19.635 "admin_cmd_passthru": { 00:17:19.635 "identify_ctrlr": false 00:17:19.635 } 00:17:19.635 } 00:17:19.635 }, 00:17:19.635 { 00:17:19.635 "method": "nvmf_set_max_subsystems", 00:17:19.635 "params": { 00:17:19.635 "max_subsystems": 1024 00:17:19.635 } 00:17:19.635 }, 00:17:19.635 { 00:17:19.635 "method": "nvmf_set_crdt", 00:17:19.635 "params": { 00:17:19.635 "crdt1": 0, 00:17:19.635 "crdt2": 0, 00:17:19.635 "crdt3": 0 00:17:19.635 } 
00:17:19.635 }, 00:17:19.635 { 00:17:19.635 "method": "nvmf_create_transport", 00:17:19.635 "params": { 00:17:19.635 "trtype": "TCP", 00:17:19.635 "max_queue_depth": 128, 00:17:19.635 "max_io_qpairs_per_ctrlr": 127, 00:17:19.635 "in_capsule_data_size": 4096, 00:17:19.635 "max_io_size": 131072, 00:17:19.635 "io_unit_size": 131072, 00:17:19.635 "max_aq_depth": 128, 00:17:19.635 "num_shared_buffers": 511, 00:17:19.635 "buf_cache_size": 4294967295, 00:17:19.635 "dif_insert_or_strip": false, 00:17:19.635 "zcopy": false, 00:17:19.635 "c2h_success": false, 00:17:19.635 "sock_priority": 0, 00:17:19.635 "abort_timeout_sec": 1 00:17:19.635 } 00:17:19.635 }, 00:17:19.636 { 00:17:19.636 "method": "nvmf_create_subsystem", 00:17:19.636 "params": { 00:17:19.636 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.636 "allow_any_host": false, 00:17:19.636 "serial_number": "SPDK00000000000001", 00:17:19.636 "model_number": "SPDK bdev Controller", 00:17:19.636 "max_namespaces": 10, 00:17:19.636 "min_cntlid": 1, 00:17:19.636 "max_cntlid": 65519, 00:17:19.636 "ana_reporting": false 00:17:19.636 } 00:17:19.636 }, 00:17:19.636 { 00:17:19.636 "method": "nvmf_subsystem_add_host", 00:17:19.636 "params": { 00:17:19.636 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.636 "host": "nqn.2016-06.io.spdk:host1", 00:17:19.636 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:17:19.636 } 00:17:19.636 }, 00:17:19.636 { 00:17:19.636 "method": "nvmf_subsystem_add_ns", 00:17:19.636 "params": { 00:17:19.636 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.636 "namespace": { 00:17:19.636 "nsid": 1, 00:17:19.636 "bdev_name": "malloc0", 00:17:19.636 "nguid": "2116A4453CD4452089161A18449AF0D9", 00:17:19.636 "uuid": "2116a445-3cd4-4520-8916-1a18449af0d9" 00:17:19.636 } 00:17:19.636 } 00:17:19.636 }, 00:17:19.636 { 00:17:19.636 "method": "nvmf_subsystem_add_listener", 00:17:19.636 "params": { 00:17:19.636 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.636 "listen_address": { 00:17:19.636 "trtype": "TCP", 00:17:19.636 "adrfam": "IPv4", 00:17:19.636 "traddr": "10.0.0.2", 00:17:19.636 "trsvcid": "4420" 00:17:19.636 }, 00:17:19.636 "secure_channel": true 00:17:19.636 } 00:17:19.636 } 00:17:19.636 ] 00:17:19.636 } 00:17:19.636 ] 00:17:19.636 }' 00:17:19.636 06:12:25 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:19.894 06:12:26 -- target/tls.sh@206 -- # bdevperfconf='{ 00:17:19.894 "subsystems": [ 00:17:19.894 { 00:17:19.894 "subsystem": "iobuf", 00:17:19.894 "config": [ 00:17:19.894 { 00:17:19.894 "method": "iobuf_set_options", 00:17:19.894 "params": { 00:17:19.894 "small_pool_count": 8192, 00:17:19.894 "large_pool_count": 1024, 00:17:19.894 "small_bufsize": 8192, 00:17:19.894 "large_bufsize": 135168 00:17:19.894 } 00:17:19.894 } 00:17:19.894 ] 00:17:19.894 }, 00:17:19.894 { 00:17:19.894 "subsystem": "sock", 00:17:19.894 "config": [ 00:17:19.894 { 00:17:19.894 "method": "sock_impl_set_options", 00:17:19.894 "params": { 00:17:19.894 "impl_name": "posix", 00:17:19.894 "recv_buf_size": 2097152, 00:17:19.894 "send_buf_size": 2097152, 00:17:19.894 "enable_recv_pipe": true, 00:17:19.894 "enable_quickack": false, 00:17:19.894 "enable_placement_id": 0, 00:17:19.894 "enable_zerocopy_send_server": true, 00:17:19.894 "enable_zerocopy_send_client": false, 00:17:19.894 "zerocopy_threshold": 0, 00:17:19.894 "tls_version": 0, 00:17:19.894 "enable_ktls": false 00:17:19.894 } 00:17:19.894 }, 00:17:19.894 { 00:17:19.894 "method": 
"sock_impl_set_options", 00:17:19.894 "params": { 00:17:19.894 "impl_name": "ssl", 00:17:19.894 "recv_buf_size": 4096, 00:17:19.894 "send_buf_size": 4096, 00:17:19.894 "enable_recv_pipe": true, 00:17:19.894 "enable_quickack": false, 00:17:19.894 "enable_placement_id": 0, 00:17:19.894 "enable_zerocopy_send_server": true, 00:17:19.894 "enable_zerocopy_send_client": false, 00:17:19.894 "zerocopy_threshold": 0, 00:17:19.894 "tls_version": 0, 00:17:19.894 "enable_ktls": false 00:17:19.894 } 00:17:19.894 } 00:17:19.894 ] 00:17:19.894 }, 00:17:19.894 { 00:17:19.894 "subsystem": "vmd", 00:17:19.894 "config": [] 00:17:19.894 }, 00:17:19.894 { 00:17:19.894 "subsystem": "accel", 00:17:19.894 "config": [ 00:17:19.894 { 00:17:19.894 "method": "accel_set_options", 00:17:19.894 "params": { 00:17:19.894 "small_cache_size": 128, 00:17:19.894 "large_cache_size": 16, 00:17:19.894 "task_count": 2048, 00:17:19.894 "sequence_count": 2048, 00:17:19.894 "buf_count": 2048 00:17:19.894 } 00:17:19.894 } 00:17:19.894 ] 00:17:19.894 }, 00:17:19.894 { 00:17:19.894 "subsystem": "bdev", 00:17:19.894 "config": [ 00:17:19.894 { 00:17:19.894 "method": "bdev_set_options", 00:17:19.894 "params": { 00:17:19.894 "bdev_io_pool_size": 65535, 00:17:19.894 "bdev_io_cache_size": 256, 00:17:19.894 "bdev_auto_examine": true, 00:17:19.894 "iobuf_small_cache_size": 128, 00:17:19.894 "iobuf_large_cache_size": 16 00:17:19.894 } 00:17:19.894 }, 00:17:19.894 { 00:17:19.894 "method": "bdev_raid_set_options", 00:17:19.894 "params": { 00:17:19.894 "process_window_size_kb": 1024 00:17:19.894 } 00:17:19.894 }, 00:17:19.894 { 00:17:19.894 "method": "bdev_iscsi_set_options", 00:17:19.894 "params": { 00:17:19.894 "timeout_sec": 30 00:17:19.894 } 00:17:19.894 }, 00:17:19.894 { 00:17:19.894 "method": "bdev_nvme_set_options", 00:17:19.894 "params": { 00:17:19.894 "action_on_timeout": "none", 00:17:19.894 "timeout_us": 0, 00:17:19.894 "timeout_admin_us": 0, 00:17:19.894 "keep_alive_timeout_ms": 10000, 00:17:19.894 "transport_retry_count": 4, 00:17:19.894 "arbitration_burst": 0, 00:17:19.895 "low_priority_weight": 0, 00:17:19.895 "medium_priority_weight": 0, 00:17:19.895 "high_priority_weight": 0, 00:17:19.895 "nvme_adminq_poll_period_us": 10000, 00:17:19.895 "nvme_ioq_poll_period_us": 0, 00:17:19.895 "io_queue_requests": 512, 00:17:19.895 "delay_cmd_submit": true, 00:17:19.895 "bdev_retry_count": 3, 00:17:19.895 "transport_ack_timeout": 0, 00:17:19.895 "ctrlr_loss_timeout_sec": 0, 00:17:19.895 "reconnect_delay_sec": 0, 00:17:19.895 "fast_io_fail_timeout_sec": 0, 00:17:19.895 "generate_uuids": false, 00:17:19.895 "transport_tos": 0, 00:17:19.895 "io_path_stat": false, 00:17:19.895 "allow_accel_sequence": false 00:17:19.895 } 00:17:19.895 }, 00:17:19.895 { 00:17:19.895 "method": "bdev_nvme_attach_controller", 00:17:19.895 "params": { 00:17:19.895 "name": "TLSTEST", 00:17:19.895 "trtype": "TCP", 00:17:19.895 "adrfam": "IPv4", 00:17:19.895 "traddr": "10.0.0.2", 00:17:19.895 "trsvcid": "4420", 00:17:19.895 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.895 "prchk_reftag": false, 00:17:19.895 "prchk_guard": false, 00:17:19.895 "ctrlr_loss_timeout_sec": 0, 00:17:19.895 "reconnect_delay_sec": 0, 00:17:19.895 "fast_io_fail_timeout_sec": 0, 00:17:19.895 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:17:19.895 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:19.895 "hdgst": false, 00:17:19.895 "ddgst": false 00:17:19.895 } 00:17:19.895 }, 00:17:19.895 { 00:17:19.895 "method": "bdev_nvme_set_hotplug", 00:17:19.895 
"params": { 00:17:19.895 "period_us": 100000, 00:17:19.895 "enable": false 00:17:19.895 } 00:17:19.895 }, 00:17:19.895 { 00:17:19.895 "method": "bdev_wait_for_examine" 00:17:19.895 } 00:17:19.895 ] 00:17:19.895 }, 00:17:19.895 { 00:17:19.895 "subsystem": "nbd", 00:17:19.895 "config": [] 00:17:19.895 } 00:17:19.895 ] 00:17:19.895 }' 00:17:19.895 06:12:26 -- target/tls.sh@208 -- # killprocess 1136599 00:17:19.895 06:12:26 -- common/autotest_common.sh@926 -- # '[' -z 1136599 ']' 00:17:19.895 06:12:26 -- common/autotest_common.sh@930 -- # kill -0 1136599 00:17:19.895 06:12:26 -- common/autotest_common.sh@931 -- # uname 00:17:19.895 06:12:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:19.895 06:12:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1136599 00:17:19.895 06:12:26 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:19.895 06:12:26 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:19.895 06:12:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1136599' 00:17:19.895 killing process with pid 1136599 00:17:19.895 06:12:26 -- common/autotest_common.sh@945 -- # kill 1136599 00:17:19.895 Received shutdown signal, test time was about 10.000000 seconds 00:17:19.895 00:17:19.895 Latency(us) 00:17:19.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.895 =================================================================================================================== 00:17:19.895 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:19.895 06:12:26 -- common/autotest_common.sh@950 -- # wait 1136599 00:17:20.152 06:12:26 -- target/tls.sh@209 -- # killprocess 1136303 00:17:20.152 06:12:26 -- common/autotest_common.sh@926 -- # '[' -z 1136303 ']' 00:17:20.152 06:12:26 -- common/autotest_common.sh@930 -- # kill -0 1136303 00:17:20.152 06:12:26 -- common/autotest_common.sh@931 -- # uname 00:17:20.152 06:12:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:20.152 06:12:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1136303 00:17:20.152 06:12:26 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:20.152 06:12:26 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:20.152 06:12:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1136303' 00:17:20.152 killing process with pid 1136303 00:17:20.152 06:12:26 -- common/autotest_common.sh@945 -- # kill 1136303 00:17:20.152 06:12:26 -- common/autotest_common.sh@950 -- # wait 1136303 00:17:20.410 06:12:26 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:20.410 06:12:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:20.410 06:12:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:20.410 06:12:26 -- target/tls.sh@212 -- # echo '{ 00:17:20.410 "subsystems": [ 00:17:20.410 { 00:17:20.410 "subsystem": "iobuf", 00:17:20.410 "config": [ 00:17:20.410 { 00:17:20.410 "method": "iobuf_set_options", 00:17:20.410 "params": { 00:17:20.410 "small_pool_count": 8192, 00:17:20.410 "large_pool_count": 1024, 00:17:20.410 "small_bufsize": 8192, 00:17:20.410 "large_bufsize": 135168 00:17:20.410 } 00:17:20.410 } 00:17:20.410 ] 00:17:20.410 }, 00:17:20.410 { 00:17:20.410 "subsystem": "sock", 00:17:20.410 "config": [ 00:17:20.410 { 00:17:20.410 "method": "sock_impl_set_options", 00:17:20.410 "params": { 00:17:20.410 "impl_name": "posix", 00:17:20.410 "recv_buf_size": 2097152, 00:17:20.410 "send_buf_size": 2097152, 
00:17:20.410 "enable_recv_pipe": true, 00:17:20.410 "enable_quickack": false, 00:17:20.410 "enable_placement_id": 0, 00:17:20.410 "enable_zerocopy_send_server": true, 00:17:20.410 "enable_zerocopy_send_client": false, 00:17:20.410 "zerocopy_threshold": 0, 00:17:20.410 "tls_version": 0, 00:17:20.410 "enable_ktls": false 00:17:20.410 } 00:17:20.410 }, 00:17:20.410 { 00:17:20.410 "method": "sock_impl_set_options", 00:17:20.410 "params": { 00:17:20.410 "impl_name": "ssl", 00:17:20.410 "recv_buf_size": 4096, 00:17:20.410 "send_buf_size": 4096, 00:17:20.410 "enable_recv_pipe": true, 00:17:20.410 "enable_quickack": false, 00:17:20.410 "enable_placement_id": 0, 00:17:20.410 "enable_zerocopy_send_server": true, 00:17:20.410 "enable_zerocopy_send_client": false, 00:17:20.410 "zerocopy_threshold": 0, 00:17:20.410 "tls_version": 0, 00:17:20.410 "enable_ktls": false 00:17:20.410 } 00:17:20.410 } 00:17:20.410 ] 00:17:20.410 }, 00:17:20.410 { 00:17:20.410 "subsystem": "vmd", 00:17:20.410 "config": [] 00:17:20.410 }, 00:17:20.410 { 00:17:20.410 "subsystem": "accel", 00:17:20.410 "config": [ 00:17:20.410 { 00:17:20.410 "method": "accel_set_options", 00:17:20.410 "params": { 00:17:20.410 "small_cache_size": 128, 00:17:20.410 "large_cache_size": 16, 00:17:20.410 "task_count": 2048, 00:17:20.410 "sequence_count": 2048, 00:17:20.410 "buf_count": 2048 00:17:20.410 } 00:17:20.410 } 00:17:20.410 ] 00:17:20.410 }, 00:17:20.410 { 00:17:20.410 "subsystem": "bdev", 00:17:20.410 "config": [ 00:17:20.410 { 00:17:20.410 "method": "bdev_set_options", 00:17:20.410 "params": { 00:17:20.410 "bdev_io_pool_size": 65535, 00:17:20.410 "bdev_io_cache_size": 256, 00:17:20.410 "bdev_auto_examine": true, 00:17:20.410 "iobuf_small_cache_size": 128, 00:17:20.410 "iobuf_large_cache_size": 16 00:17:20.410 } 00:17:20.410 }, 00:17:20.410 { 00:17:20.410 "method": "bdev_raid_set_options", 00:17:20.410 "params": { 00:17:20.410 "process_window_size_kb": 1024 00:17:20.410 } 00:17:20.410 }, 00:17:20.410 { 00:17:20.410 "method": "bdev_iscsi_set_options", 00:17:20.410 "params": { 00:17:20.410 "timeout_sec": 30 00:17:20.410 } 00:17:20.410 }, 00:17:20.410 { 00:17:20.410 "method": "bdev_nvme_set_options", 00:17:20.410 "params": { 00:17:20.410 "action_on_timeout": "none", 00:17:20.410 "timeout_us": 0, 00:17:20.410 "timeout_admin_us": 0, 00:17:20.410 "keep_alive_timeout_ms": 10000, 00:17:20.410 "transport_retry_count": 4, 00:17:20.410 "arbitration_burst": 0, 00:17:20.410 "low_priority_weight": 0, 00:17:20.410 "medium_priority_weight": 0, 00:17:20.410 "high_priority_weight": 0, 00:17:20.410 "nvme_adminq_poll_period_us": 10000, 00:17:20.410 "nvme_ioq_poll_period_us": 0, 00:17:20.410 "io_queue_requests": 0, 00:17:20.410 "delay_cmd_submit": true, 00:17:20.410 "bdev_retry_count": 3, 00:17:20.410 "transport_ack_timeout": 0, 00:17:20.410 "ctrlr_loss_timeout_sec": 0, 00:17:20.410 "reconnect_delay_sec": 0, 00:17:20.410 "fast_io_fail_timeout_sec": 0, 00:17:20.410 "generate_uuids": false, 00:17:20.410 "transport_tos": 0, 00:17:20.410 "io_path_stat": false, 00:17:20.410 "allow_accel_sequence": false 00:17:20.410 } 00:17:20.410 }, 00:17:20.410 { 00:17:20.410 "method": "bdev_nvme_set_hotplug", 00:17:20.410 "params": { 00:17:20.410 "period_us": 100000, 00:17:20.410 "enable": false 00:17:20.410 } 00:17:20.410 }, 00:17:20.410 { 00:17:20.410 "method": "bdev_malloc_create", 00:17:20.410 "params": { 00:17:20.410 "name": "malloc0", 00:17:20.410 "num_blocks": 8192, 00:17:20.410 "block_size": 4096, 00:17:20.410 "physical_block_size": 4096, 00:17:20.410 "uuid": 
"2116a445-3cd4-4520-8916-1a18449af0d9", 00:17:20.410 "optimal_io_boundary": 0 00:17:20.410 } 00:17:20.410 }, 00:17:20.410 { 00:17:20.410 "method": "bdev_wait_for_examine" 00:17:20.410 } 00:17:20.410 ] 00:17:20.410 }, 00:17:20.410 { 00:17:20.410 "subsystem": "nbd", 00:17:20.410 "config": [] 00:17:20.410 }, 00:17:20.410 { 00:17:20.410 "subsystem": "scheduler", 00:17:20.410 "config": [ 00:17:20.410 { 00:17:20.410 "method": "framework_set_scheduler", 00:17:20.410 "params": { 00:17:20.410 "name": "static" 00:17:20.410 } 00:17:20.410 } 00:17:20.410 ] 00:17:20.410 }, 00:17:20.410 { 00:17:20.410 "subsystem": "nvmf", 00:17:20.410 "config": [ 00:17:20.410 { 00:17:20.410 "method": "nvmf_set_config", 00:17:20.410 "params": { 00:17:20.410 "discovery_filter": "match_any", 00:17:20.410 "admin_cmd_passthru": { 00:17:20.410 "identify_ctrlr": false 00:17:20.410 } 00:17:20.410 } 00:17:20.410 }, 00:17:20.410 { 00:17:20.410 "method": "nvmf_set_max_subsystems", 00:17:20.410 "params": { 00:17:20.410 "max_subsystems": 1024 00:17:20.410 } 00:17:20.410 }, 00:17:20.410 { 00:17:20.410 "method": "nvmf_set_crdt", 00:17:20.410 "params": { 00:17:20.410 "crdt1": 0, 00:17:20.410 "crdt2": 0, 00:17:20.410 "crdt3": 0 00:17:20.410 } 00:17:20.410 }, 00:17:20.410 { 00:17:20.410 "method": "nvmf_create_transport", 00:17:20.410 "params": { 00:17:20.410 "trtype": "TCP", 00:17:20.410 "max_queue_depth": 128, 00:17:20.410 "max_io_qpairs_per_ctrlr": 127, 00:17:20.410 "in_capsule_data_size": 4096, 00:17:20.410 "max_io_size": 131072, 00:17:20.410 "io_unit_size": 131072, 00:17:20.410 "max_aq_depth": 128, 00:17:20.411 "num_shared_buffers": 511, 00:17:20.411 "buf_cache_size": 4294967295, 00:17:20.411 "dif_insert_or_strip": false, 00:17:20.411 "zcopy": false, 00:17:20.411 "c2h_success": false, 00:17:20.411 "sock_priority": 0, 00:17:20.411 "abort_timeout_sec": 1 00:17:20.411 } 00:17:20.411 }, 00:17:20.411 { 00:17:20.411 "method": "nvmf_create_subsystem", 00:17:20.411 "params": { 00:17:20.411 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:20.411 "allow_any_host": false, 00:17:20.411 "serial_number": "SPDK00000000000001", 00:17:20.411 "model_number": "SPDK bdev Controller", 00:17:20.411 "max_namespaces": 10, 00:17:20.411 "min_cntlid": 1, 00:17:20.411 "max_cntlid": 65519, 00:17:20.411 "ana_reporting": false 00:17:20.411 } 00:17:20.411 }, 00:17:20.411 { 00:17:20.411 "method": "nvmf_subsystem_add_host", 00:17:20.411 "params": { 00:17:20.411 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:20.411 "host": "nqn.2016-06.io.spdk:host1", 00:17:20.411 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:17:20.411 } 00:17:20.411 }, 00:17:20.411 { 00:17:20.411 "method": "nvmf_subsystem_add_ns", 00:17:20.411 "params": { 00:17:20.411 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:20.411 "namespace": { 00:17:20.411 "nsid": 1, 00:17:20.411 "bdev_name": "malloc0", 00:17:20.411 "nguid": "2116A4453CD4452089161A18449AF0D9", 00:17:20.411 "uuid": "2116a445-3cd4-4520-8916-1a18449af0d9" 00:17:20.411 } 00:17:20.411 } 00:17:20.411 }, 00:17:20.411 { 00:17:20.411 "method": "nvmf_subsystem_add_listener", 00:17:20.411 "params": { 00:17:20.411 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:20.411 "listen_address": { 00:17:20.411 "trtype": "TCP", 00:17:20.411 "adrfam": "IPv4", 00:17:20.411 "traddr": "10.0.0.2", 00:17:20.411 "trsvcid": "4420" 00:17:20.411 }, 00:17:20.411 "secure_channel": true 00:17:20.411 } 00:17:20.411 } 00:17:20.411 ] 00:17:20.411 } 00:17:20.411 ] 00:17:20.411 }' 00:17:20.411 06:12:26 -- common/autotest_common.sh@10 -- # set +x 
00:17:20.411 06:12:26 -- nvmf/common.sh@469 -- # nvmfpid=1136961 00:17:20.411 06:12:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:20.411 06:12:26 -- nvmf/common.sh@470 -- # waitforlisten 1136961 00:17:20.411 06:12:26 -- common/autotest_common.sh@819 -- # '[' -z 1136961 ']' 00:17:20.411 06:12:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.411 06:12:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:20.411 06:12:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.411 06:12:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:20.411 06:12:26 -- common/autotest_common.sh@10 -- # set +x 00:17:20.411 [2024-07-13 06:12:26.815304] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:20.411 [2024-07-13 06:12:26.815384] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.411 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.411 [2024-07-13 06:12:26.885792] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.668 [2024-07-13 06:12:26.995529] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:20.668 [2024-07-13 06:12:26.995692] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.668 [2024-07-13 06:12:26.995709] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.668 [2024-07-13 06:12:26.995721] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
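Everything from target/tls.sh@205 onward exercises configuration persistence: the live target and bdevperf configurations are captured with save_config, both processes are killed, and fresh instances are started straight from those JSON blobs. On the target side the pattern is roughly the following (tgtconf is shorthand; the /dev/fd/62 seen in the log comes from a process substitution like this):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  tgtconf=$($SPDK/scripts/rpc.py save_config)    # JSON snapshot, including the TLS listener and PSK path

  # ... old nvmf_tgt killed here ...

  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &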
00:17:20.668 [2024-07-13 06:12:26.995749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.926 [2024-07-13 06:12:27.228315] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:20.926 [2024-07-13 06:12:27.260335] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:20.926 [2024-07-13 06:12:27.260558] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:21.490 06:12:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:21.490 06:12:27 -- common/autotest_common.sh@852 -- # return 0 00:17:21.490 06:12:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:21.490 06:12:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:21.490 06:12:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.490 06:12:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:21.490 06:12:27 -- target/tls.sh@216 -- # bdevperf_pid=1137053 00:17:21.490 06:12:27 -- target/tls.sh@217 -- # waitforlisten 1137053 /var/tmp/bdevperf.sock 00:17:21.490 06:12:27 -- common/autotest_common.sh@819 -- # '[' -z 1137053 ']' 00:17:21.490 06:12:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:21.490 06:12:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:21.490 06:12:27 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:21.490 06:12:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:21.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:21.490 06:12:27 -- target/tls.sh@213 -- # echo '{ 00:17:21.490 "subsystems": [ 00:17:21.490 { 00:17:21.490 "subsystem": "iobuf", 00:17:21.490 "config": [ 00:17:21.490 { 00:17:21.490 "method": "iobuf_set_options", 00:17:21.490 "params": { 00:17:21.490 "small_pool_count": 8192, 00:17:21.490 "large_pool_count": 1024, 00:17:21.490 "small_bufsize": 8192, 00:17:21.490 "large_bufsize": 135168 00:17:21.490 } 00:17:21.490 } 00:17:21.490 ] 00:17:21.490 }, 00:17:21.490 { 00:17:21.490 "subsystem": "sock", 00:17:21.490 "config": [ 00:17:21.490 { 00:17:21.490 "method": "sock_impl_set_options", 00:17:21.490 "params": { 00:17:21.490 "impl_name": "posix", 00:17:21.490 "recv_buf_size": 2097152, 00:17:21.490 "send_buf_size": 2097152, 00:17:21.490 "enable_recv_pipe": true, 00:17:21.490 "enable_quickack": false, 00:17:21.490 "enable_placement_id": 0, 00:17:21.490 "enable_zerocopy_send_server": true, 00:17:21.490 "enable_zerocopy_send_client": false, 00:17:21.490 "zerocopy_threshold": 0, 00:17:21.490 "tls_version": 0, 00:17:21.490 "enable_ktls": false 00:17:21.490 } 00:17:21.490 }, 00:17:21.490 { 00:17:21.490 "method": "sock_impl_set_options", 00:17:21.490 "params": { 00:17:21.490 "impl_name": "ssl", 00:17:21.490 "recv_buf_size": 4096, 00:17:21.490 "send_buf_size": 4096, 00:17:21.490 "enable_recv_pipe": true, 00:17:21.490 "enable_quickack": false, 00:17:21.490 "enable_placement_id": 0, 00:17:21.490 "enable_zerocopy_send_server": true, 00:17:21.490 "enable_zerocopy_send_client": false, 00:17:21.490 "zerocopy_threshold": 0, 00:17:21.490 "tls_version": 0, 00:17:21.490 "enable_ktls": false 00:17:21.490 } 00:17:21.490 } 00:17:21.490 ] 00:17:21.490 }, 00:17:21.490 { 00:17:21.490 "subsystem": "vmd", 00:17:21.490 "config": [] 00:17:21.490 }, 00:17:21.490 { 00:17:21.490 "subsystem": "accel", 00:17:21.490 "config": [ 00:17:21.490 { 00:17:21.490 "method": "accel_set_options", 00:17:21.490 "params": { 00:17:21.490 "small_cache_size": 128, 00:17:21.490 "large_cache_size": 16, 00:17:21.491 "task_count": 2048, 00:17:21.491 "sequence_count": 2048, 00:17:21.491 "buf_count": 2048 00:17:21.491 } 00:17:21.491 } 00:17:21.491 ] 00:17:21.491 }, 00:17:21.491 { 00:17:21.491 "subsystem": "bdev", 00:17:21.491 "config": [ 00:17:21.491 { 00:17:21.491 "method": "bdev_set_options", 00:17:21.491 "params": { 00:17:21.491 "bdev_io_pool_size": 65535, 00:17:21.491 "bdev_io_cache_size": 256, 00:17:21.491 "bdev_auto_examine": true, 00:17:21.491 "iobuf_small_cache_size": 128, 00:17:21.491 "iobuf_large_cache_size": 16 00:17:21.491 } 00:17:21.491 }, 00:17:21.491 { 00:17:21.491 "method": "bdev_raid_set_options", 00:17:21.491 "params": { 00:17:21.491 "process_window_size_kb": 1024 00:17:21.491 } 00:17:21.491 }, 00:17:21.491 { 00:17:21.491 "method": "bdev_iscsi_set_options", 00:17:21.491 "params": { 00:17:21.491 "timeout_sec": 30 00:17:21.491 } 00:17:21.491 }, 00:17:21.491 { 00:17:21.491 "method": "bdev_nvme_set_options", 00:17:21.491 "params": { 00:17:21.491 "action_on_timeout": "none", 00:17:21.491 "timeout_us": 0, 00:17:21.491 "timeout_admin_us": 0, 00:17:21.491 "keep_alive_timeout_ms": 10000, 00:17:21.491 "transport_retry_count": 4, 00:17:21.491 "arbitration_burst": 0, 00:17:21.491 "low_priority_weight": 0, 00:17:21.491 "medium_priority_weight": 0, 00:17:21.491 "high_priority_weight": 0, 00:17:21.491 "nvme_adminq_poll_period_us": 10000, 00:17:21.491 "nvme_ioq_poll_period_us": 0, 00:17:21.491 "io_queue_requests": 512, 00:17:21.491 "delay_cmd_submit": true, 00:17:21.491 "bdev_retry_count": 3, 00:17:21.491 "transport_ack_timeout": 0, 00:17:21.491 
"ctrlr_loss_timeout_sec": 0, 00:17:21.491 "reconnect_delay_sec": 0, 00:17:21.491 "fast_io_fail_timeout_sec": 0, 00:17:21.491 "generate_uuids": false, 00:17:21.491 "transport_tos": 0, 00:17:21.491 "io_path_stat": false, 00:17:21.491 "allow_accel_sequence": false 00:17:21.491 } 00:17:21.491 }, 00:17:21.491 { 00:17:21.491 "method": "bdev_nvme_attach_controller", 00:17:21.491 "params": { 00:17:21.491 "name": "TLSTEST", 00:17:21.491 "trtype": "TCP", 00:17:21.491 "adrfam": "IPv4", 00:17:21.491 "traddr": "10.0.0.2", 00:17:21.491 "trsvcid": "4420", 00:17:21.491 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.491 "prchk_reftag": false, 00:17:21.491 "prchk_guard": false, 00:17:21.491 "ctrlr_loss_timeout_sec": 0, 00:17:21.491 "reconnect_delay_sec": 0, 00:17:21.491 "fast_io_fail_timeout_sec": 0, 00:17:21.491 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:17:21.491 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:21.491 "hdgst": false, 00:17:21.491 "ddgst": false 00:17:21.491 } 00:17:21.491 }, 00:17:21.491 { 00:17:21.491 "method": "bdev_nvme_set_hotplug", 00:17:21.491 "params": { 00:17:21.491 "period_us": 100000, 00:17:21.491 "enable": false 00:17:21.491 } 00:17:21.491 }, 00:17:21.491 { 00:17:21.491 "method": "bdev_wait_for_examine" 00:17:21.491 } 00:17:21.491 ] 00:17:21.491 }, 00:17:21.491 { 00:17:21.491 "subsystem": "nbd", 00:17:21.491 "config": [] 00:17:21.491 } 00:17:21.491 ] 00:17:21.491 }' 00:17:21.491 06:12:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:21.491 06:12:27 -- common/autotest_common.sh@10 -- # set +x 00:17:21.491 [2024-07-13 06:12:27.816773] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:21.491 [2024-07-13 06:12:27.816873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1137053 ] 00:17:21.491 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.491 [2024-07-13 06:12:27.876799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.491 [2024-07-13 06:12:27.991963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:21.748 [2024-07-13 06:12:28.153300] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:22.312 06:12:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:22.312 06:12:28 -- common/autotest_common.sh@852 -- # return 0 00:17:22.312 06:12:28 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:22.569 Running I/O for 10 seconds... 
00:17:32.524 00:17:32.524 Latency(us) 00:17:32.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.524 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:32.524 Verification LBA range: start 0x0 length 0x2000 00:17:32.524 TLSTESTn1 : 10.03 2421.69 9.46 0.00 0.00 52780.10 7330.32 63302.92 00:17:32.524 =================================================================================================================== 00:17:32.524 Total : 2421.69 9.46 0.00 0.00 52780.10 7330.32 63302.92 00:17:32.524 0 00:17:32.524 06:12:38 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:32.524 06:12:38 -- target/tls.sh@223 -- # killprocess 1137053 00:17:32.524 06:12:38 -- common/autotest_common.sh@926 -- # '[' -z 1137053 ']' 00:17:32.524 06:12:38 -- common/autotest_common.sh@930 -- # kill -0 1137053 00:17:32.524 06:12:38 -- common/autotest_common.sh@931 -- # uname 00:17:32.524 06:12:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:32.524 06:12:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1137053 00:17:32.524 06:12:39 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:32.524 06:12:39 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:32.524 06:12:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1137053' 00:17:32.524 killing process with pid 1137053 00:17:32.524 06:12:39 -- common/autotest_common.sh@945 -- # kill 1137053 00:17:32.524 Received shutdown signal, test time was about 10.000000 seconds 00:17:32.524 00:17:32.524 Latency(us) 00:17:32.524 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.524 =================================================================================================================== 00:17:32.524 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:32.524 06:12:39 -- common/autotest_common.sh@950 -- # wait 1137053 00:17:32.782 06:12:39 -- target/tls.sh@224 -- # killprocess 1136961 00:17:32.782 06:12:39 -- common/autotest_common.sh@926 -- # '[' -z 1136961 ']' 00:17:32.782 06:12:39 -- common/autotest_common.sh@930 -- # kill -0 1136961 00:17:32.782 06:12:39 -- common/autotest_common.sh@931 -- # uname 00:17:32.782 06:12:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:32.782 06:12:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1136961 00:17:32.782 06:12:39 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:32.782 06:12:39 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:32.782 06:12:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1136961' 00:17:32.782 killing process with pid 1136961 00:17:32.782 06:12:39 -- common/autotest_common.sh@945 -- # kill 1136961 00:17:32.782 06:12:39 -- common/autotest_common.sh@950 -- # wait 1136961 00:17:33.349 06:12:39 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:17:33.349 06:12:39 -- target/tls.sh@227 -- # cleanup 00:17:33.349 06:12:39 -- target/tls.sh@15 -- # process_shm --id 0 00:17:33.349 06:12:39 -- common/autotest_common.sh@796 -- # type=--id 00:17:33.349 06:12:39 -- common/autotest_common.sh@797 -- # id=0 00:17:33.349 06:12:39 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:17:33.350 06:12:39 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:33.350 06:12:39 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:17:33.350 06:12:39 -- common/autotest_common.sh@804 -- # [[ 
-z nvmf_trace.0 ]] 00:17:33.350 06:12:39 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:17:33.350 06:12:39 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:33.350 nvmf_trace.0 00:17:33.350 06:12:39 -- common/autotest_common.sh@811 -- # return 0 00:17:33.350 06:12:39 -- target/tls.sh@16 -- # killprocess 1137053 00:17:33.350 06:12:39 -- common/autotest_common.sh@926 -- # '[' -z 1137053 ']' 00:17:33.350 06:12:39 -- common/autotest_common.sh@930 -- # kill -0 1137053 00:17:33.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1137053) - No such process 00:17:33.350 06:12:39 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1137053 is not found' 00:17:33.350 Process with pid 1137053 is not found 00:17:33.350 06:12:39 -- target/tls.sh@17 -- # nvmftestfini 00:17:33.350 06:12:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:33.350 06:12:39 -- nvmf/common.sh@116 -- # sync 00:17:33.350 06:12:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:33.350 06:12:39 -- nvmf/common.sh@119 -- # set +e 00:17:33.350 06:12:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:33.350 06:12:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:33.350 rmmod nvme_tcp 00:17:33.350 rmmod nvme_fabrics 00:17:33.350 rmmod nvme_keyring 00:17:33.350 06:12:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:33.350 06:12:39 -- nvmf/common.sh@123 -- # set -e 00:17:33.350 06:12:39 -- nvmf/common.sh@124 -- # return 0 00:17:33.350 06:12:39 -- nvmf/common.sh@477 -- # '[' -n 1136961 ']' 00:17:33.350 06:12:39 -- nvmf/common.sh@478 -- # killprocess 1136961 00:17:33.350 06:12:39 -- common/autotest_common.sh@926 -- # '[' -z 1136961 ']' 00:17:33.350 06:12:39 -- common/autotest_common.sh@930 -- # kill -0 1136961 00:17:33.350 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1136961) - No such process 00:17:33.350 06:12:39 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1136961 is not found' 00:17:33.350 Process with pid 1136961 is not found 00:17:33.350 06:12:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:33.350 06:12:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:33.350 06:12:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:33.350 06:12:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:33.350 06:12:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:33.350 06:12:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.350 06:12:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:33.350 06:12:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.279 06:12:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:35.279 06:12:41 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:35.279 00:17:35.279 real 1m14.304s 00:17:35.279 user 1m53.043s 00:17:35.279 sys 0m26.530s 00:17:35.279 06:12:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:35.279 06:12:41 -- common/autotest_common.sh@10 -- # set +x 00:17:35.279 ************************************ 00:17:35.279 END TEST nvmf_tls 00:17:35.279 ************************************ 
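Worth noting from the cleanup above: before the processes are killed, process_shm archives the tracepoint shared-memory file into the output directory, which is what makes the "copy /dev/shm/nvmf_trace.0 for offline analysis" hint from the startup notices actionable. A short sketch of working with that archive afterwards (the spdk_trace -f flag for parsing a copied file is an assumption on my part and may differ between SPDK releases):

# archive the trace file the same way the harness does
tar -C /dev/shm/ -czf nvmf_trace.0_shm.tar.gz nvmf_trace.0

# later, on a machine with a matching SPDK build
tar -xzf nvmf_trace.0_shm.tar.gz
spdk_trace -f ./nvmf_trace.0   # assumed flag for reading a trace file copied out of /dev/shm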
00:17:35.279 06:12:41 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:35.279 06:12:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:35.279 06:12:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:35.279 06:12:41 -- common/autotest_common.sh@10 -- # set +x 00:17:35.279 ************************************ 00:17:35.279 START TEST nvmf_fips 00:17:35.279 ************************************ 00:17:35.279 06:12:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:35.279 * Looking for test storage... 00:17:35.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:17:35.279 06:12:41 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:35.279 06:12:41 -- nvmf/common.sh@7 -- # uname -s 00:17:35.538 06:12:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.538 06:12:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.538 06:12:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:35.538 06:12:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:35.538 06:12:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.538 06:12:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.538 06:12:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.538 06:12:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.538 06:12:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.538 06:12:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.538 06:12:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:35.538 06:12:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:35.538 06:12:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.538 06:12:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.538 06:12:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:35.538 06:12:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:35.538 06:12:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.538 06:12:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.538 06:12:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.538 06:12:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.538 06:12:41 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.538 06:12:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.538 06:12:41 -- paths/export.sh@5 -- # export PATH 00:17:35.538 06:12:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.538 06:12:41 -- nvmf/common.sh@46 -- # : 0 00:17:35.538 06:12:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:35.538 06:12:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:35.538 06:12:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:35.538 06:12:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:35.539 06:12:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.539 06:12:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:35.539 06:12:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:35.539 06:12:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:35.539 06:12:41 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:35.539 06:12:41 -- fips/fips.sh@89 -- # check_openssl_version 00:17:35.539 06:12:41 -- fips/fips.sh@83 -- # local target=3.0.0 00:17:35.539 06:12:41 -- fips/fips.sh@85 -- # openssl version 00:17:35.539 06:12:41 -- fips/fips.sh@85 -- # awk '{print $2}' 00:17:35.539 06:12:41 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:17:35.539 06:12:41 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:17:35.539 06:12:41 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:35.539 06:12:41 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:35.539 06:12:41 -- scripts/common.sh@335 -- # IFS=.-: 00:17:35.539 06:12:41 -- scripts/common.sh@335 -- # read -ra ver1 00:17:35.539 06:12:41 -- scripts/common.sh@336 -- # IFS=.-: 00:17:35.539 06:12:41 -- scripts/common.sh@336 -- # read -ra ver2 00:17:35.539 06:12:41 -- scripts/common.sh@337 -- # local 'op=>=' 00:17:35.539 06:12:41 -- scripts/common.sh@339 -- # ver1_l=3 00:17:35.539 06:12:41 -- scripts/common.sh@340 -- # ver2_l=3 00:17:35.539 06:12:41 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 
00:17:35.539 06:12:41 -- scripts/common.sh@343 -- # case "$op" in 00:17:35.539 06:12:41 -- scripts/common.sh@347 -- # : 1 00:17:35.539 06:12:41 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:35.539 06:12:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:35.539 06:12:41 -- scripts/common.sh@364 -- # decimal 3 00:17:35.539 06:12:41 -- scripts/common.sh@352 -- # local d=3 00:17:35.539 06:12:41 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:35.539 06:12:41 -- scripts/common.sh@354 -- # echo 3 00:17:35.539 06:12:41 -- scripts/common.sh@364 -- # ver1[v]=3 00:17:35.539 06:12:41 -- scripts/common.sh@365 -- # decimal 3 00:17:35.539 06:12:41 -- scripts/common.sh@352 -- # local d=3 00:17:35.539 06:12:41 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:35.539 06:12:41 -- scripts/common.sh@354 -- # echo 3 00:17:35.539 06:12:41 -- scripts/common.sh@365 -- # ver2[v]=3 00:17:35.539 06:12:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:35.539 06:12:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:35.539 06:12:41 -- scripts/common.sh@363 -- # (( v++ )) 00:17:35.539 06:12:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:35.539 06:12:41 -- scripts/common.sh@364 -- # decimal 0 00:17:35.539 06:12:41 -- scripts/common.sh@352 -- # local d=0 00:17:35.539 06:12:41 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:35.539 06:12:41 -- scripts/common.sh@354 -- # echo 0 00:17:35.539 06:12:41 -- scripts/common.sh@364 -- # ver1[v]=0 00:17:35.539 06:12:41 -- scripts/common.sh@365 -- # decimal 0 00:17:35.539 06:12:41 -- scripts/common.sh@352 -- # local d=0 00:17:35.539 06:12:41 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:35.539 06:12:41 -- scripts/common.sh@354 -- # echo 0 00:17:35.539 06:12:41 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:35.539 06:12:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:35.539 06:12:41 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:35.539 06:12:41 -- scripts/common.sh@363 -- # (( v++ )) 00:17:35.539 06:12:41 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:35.539 06:12:41 -- scripts/common.sh@364 -- # decimal 9 00:17:35.539 06:12:41 -- scripts/common.sh@352 -- # local d=9 00:17:35.539 06:12:41 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:17:35.539 06:12:41 -- scripts/common.sh@354 -- # echo 9 00:17:35.539 06:12:41 -- scripts/common.sh@364 -- # ver1[v]=9 00:17:35.539 06:12:41 -- scripts/common.sh@365 -- # decimal 0 00:17:35.539 06:12:41 -- scripts/common.sh@352 -- # local d=0 00:17:35.539 06:12:41 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:35.539 06:12:41 -- scripts/common.sh@354 -- # echo 0 00:17:35.539 06:12:41 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:35.539 06:12:41 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:35.539 06:12:41 -- scripts/common.sh@366 -- # return 0 00:17:35.539 06:12:41 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:35.539 06:12:41 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:17:35.539 06:12:41 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:35.539 06:12:41 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:35.539 06:12:41 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:35.539 06:12:41 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:35.539 06:12:41 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:35.539 06:12:41 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:17:35.539 06:12:41 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:17:35.539 06:12:41 -- fips/fips.sh@114 -- # build_openssl_config 00:17:35.539 06:12:41 -- fips/fips.sh@37 -- # cat 00:17:35.539 06:12:41 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:17:35.539 06:12:41 -- fips/fips.sh@58 -- # cat - 00:17:35.539 06:12:41 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:35.539 06:12:41 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:17:35.539 06:12:41 -- fips/fips.sh@117 -- # mapfile -t providers 00:17:35.539 06:12:41 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:17:35.539 06:12:41 -- fips/fips.sh@117 -- # openssl list -providers 00:17:35.539 06:12:41 -- fips/fips.sh@117 -- # grep name 00:17:35.539 06:12:41 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:17:35.539 06:12:41 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:17:35.539 06:12:41 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:35.539 06:12:41 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:17:35.539 06:12:41 -- fips/fips.sh@128 -- # : 00:17:35.539 06:12:41 -- common/autotest_common.sh@640 -- # local es=0 00:17:35.539 06:12:41 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:35.539 06:12:41 -- common/autotest_common.sh@628 -- # local arg=openssl 00:17:35.539 06:12:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:35.539 06:12:41 -- common/autotest_common.sh@632 -- # type -t openssl 00:17:35.539 06:12:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:35.539 06:12:41 -- common/autotest_common.sh@634 -- # type -P openssl 00:17:35.539 06:12:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:35.539 06:12:41 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:17:35.539 06:12:41 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:17:35.539 06:12:41 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:17:35.539 Error setting digest 00:17:35.539 0082F72B357F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:17:35.539 0082F72B357F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:17:35.539 06:12:41 -- common/autotest_common.sh@643 -- # es=1 00:17:35.539 06:12:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:35.539 06:12:41 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:35.539 06:12:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
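The block above is fips.sh verifying that FIPS mode is genuinely enforced: it points OPENSSL_CONF at the generated spdk_fips.conf, checks that both a base and a fips provider are listed, and then expects a plain MD5 digest to fail (the "Error setting digest ... unsupported" lines are the desired outcome). Condensed into a standalone sanity check, assuming an spdk_fips.conf equivalent to the one built above:

export OPENSSL_CONF=spdk_fips.conf

# both a base provider and a fips provider should show up here
openssl list -providers | grep -i 'name:'

# MD5 is not FIPS-approved, so this must fail; success means FIPS is not enforced
if openssl md5 /dev/null >/dev/null 2>&1; then
    echo "FIPS mode is NOT being enforced" >&2
    exit 1
fi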
00:17:35.539 06:12:41 -- fips/fips.sh@131 -- # nvmftestinit 00:17:35.539 06:12:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:35.539 06:12:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:35.539 06:12:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:35.539 06:12:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:35.539 06:12:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:35.539 06:12:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.539 06:12:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.539 06:12:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.539 06:12:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:35.539 06:12:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:35.539 06:12:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:35.539 06:12:41 -- common/autotest_common.sh@10 -- # set +x 00:17:37.439 06:12:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:37.439 06:12:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:37.439 06:12:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:37.439 06:12:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:37.439 06:12:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:37.439 06:12:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:37.439 06:12:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:37.439 06:12:43 -- nvmf/common.sh@294 -- # net_devs=() 00:17:37.439 06:12:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:37.439 06:12:43 -- nvmf/common.sh@295 -- # e810=() 00:17:37.439 06:12:43 -- nvmf/common.sh@295 -- # local -ga e810 00:17:37.439 06:12:43 -- nvmf/common.sh@296 -- # x722=() 00:17:37.439 06:12:43 -- nvmf/common.sh@296 -- # local -ga x722 00:17:37.439 06:12:43 -- nvmf/common.sh@297 -- # mlx=() 00:17:37.439 06:12:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:37.439 06:12:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:37.439 06:12:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:37.439 06:12:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:37.439 06:12:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:37.439 06:12:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:37.439 06:12:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:37.439 06:12:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:37.439 06:12:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:37.439 06:12:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:37.439 06:12:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:37.439 06:12:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:37.439 06:12:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:37.439 06:12:43 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:37.439 06:12:43 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:37.439 06:12:43 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:37.439 06:12:43 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:37.439 06:12:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:37.439 06:12:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:37.439 06:12:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:37.439 Found 0000:0a:00.0 
(0x8086 - 0x159b) 00:17:37.439 06:12:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:37.439 06:12:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:37.439 06:12:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.439 06:12:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.439 06:12:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:37.439 06:12:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:37.439 06:12:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:37.439 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:37.439 06:12:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:37.439 06:12:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:37.439 06:12:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.439 06:12:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.439 06:12:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:37.439 06:12:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:37.439 06:12:43 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:37.439 06:12:43 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:37.439 06:12:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:37.439 06:12:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.439 06:12:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:37.439 06:12:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.439 06:12:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:37.439 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:37.439 06:12:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.439 06:12:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:37.439 06:12:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.439 06:12:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:37.439 06:12:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.439 06:12:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:37.439 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:37.439 06:12:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.439 06:12:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:37.439 06:12:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:37.439 06:12:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:37.439 06:12:43 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:37.439 06:12:43 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:37.439 06:12:43 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:37.439 06:12:43 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:37.439 06:12:43 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:37.439 06:12:43 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:37.439 06:12:43 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:37.439 06:12:43 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:37.439 06:12:43 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:37.439 06:12:43 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:37.439 06:12:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:37.439 06:12:43 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:37.439 06:12:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:37.439 06:12:43 -- nvmf/common.sh@247 -- # ip netns 
add cvl_0_0_ns_spdk 00:17:37.439 06:12:43 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:37.439 06:12:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:37.439 06:12:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:37.439 06:12:43 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:37.439 06:12:43 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:37.439 06:12:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:37.439 06:12:43 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:37.439 06:12:43 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:37.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:37.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:17:37.439 00:17:37.439 --- 10.0.0.2 ping statistics --- 00:17:37.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.439 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:17:37.439 06:12:43 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:37.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:37.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:17:37.697 00:17:37.697 --- 10.0.0.1 ping statistics --- 00:17:37.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.697 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:17:37.697 06:12:43 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:37.697 06:12:43 -- nvmf/common.sh@410 -- # return 0 00:17:37.697 06:12:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:37.697 06:12:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:37.697 06:12:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:37.697 06:12:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:37.697 06:12:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:37.697 06:12:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:37.697 06:12:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:37.697 06:12:43 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:17:37.697 06:12:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:37.697 06:12:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:37.697 06:12:43 -- common/autotest_common.sh@10 -- # set +x 00:17:37.697 06:12:43 -- nvmf/common.sh@469 -- # nvmfpid=1140508 00:17:37.697 06:12:43 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:37.697 06:12:43 -- nvmf/common.sh@470 -- # waitforlisten 1140508 00:17:37.697 06:12:43 -- common/autotest_common.sh@819 -- # '[' -z 1140508 ']' 00:17:37.697 06:12:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.697 06:12:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:37.697 06:12:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.697 06:12:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:37.697 06:12:43 -- common/autotest_common.sh@10 -- # set +x 00:17:37.697 [2024-07-13 06:12:44.052203] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:17:37.697 [2024-07-13 06:12:44.052291] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.697 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.697 [2024-07-13 06:12:44.119341] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.956 [2024-07-13 06:12:44.234188] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:37.956 [2024-07-13 06:12:44.234360] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.956 [2024-07-13 06:12:44.234380] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:37.956 [2024-07-13 06:12:44.234394] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:37.956 [2024-07-13 06:12:44.234428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.520 06:12:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:38.520 06:12:44 -- common/autotest_common.sh@852 -- # return 0 00:17:38.520 06:12:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:38.520 06:12:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:38.520 06:12:44 -- common/autotest_common.sh@10 -- # set +x 00:17:38.520 06:12:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.520 06:12:45 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:17:38.520 06:12:45 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:38.520 06:12:45 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:38.520 06:12:45 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:38.520 06:12:45 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:38.520 06:12:45 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:38.520 06:12:45 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:38.520 06:12:45 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:38.778 [2024-07-13 06:12:45.263296] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:38.778 [2024-07-13 06:12:45.279301] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:38.778 [2024-07-13 06:12:45.279497] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:39.037 malloc0 00:17:39.037 06:12:45 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:39.037 06:12:45 -- fips/fips.sh@148 -- # bdevperf_pid=1140673 00:17:39.037 06:12:45 -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:39.037 06:12:45 -- fips/fips.sh@149 -- # waitforlisten 1140673 /var/tmp/bdevperf.sock 00:17:39.037 06:12:45 -- common/autotest_common.sh@819 -- # '[' -z 1140673 ']' 00:17:39.037 06:12:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:39.037 06:12:45 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:17:39.037 06:12:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:39.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:39.037 06:12:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:39.037 06:12:45 -- common/autotest_common.sh@10 -- # set +x 00:17:39.037 [2024-07-13 06:12:45.400239] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:39.037 [2024-07-13 06:12:45.400327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1140673 ] 00:17:39.037 EAL: No free 2048 kB hugepages reported on node 1 00:17:39.037 [2024-07-13 06:12:45.458905] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.296 [2024-07-13 06:12:45.565157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:39.863 06:12:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:39.863 06:12:46 -- common/autotest_common.sh@852 -- # return 0 00:17:39.863 06:12:46 -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:40.121 [2024-07-13 06:12:46.511331] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:40.121 TLSTESTn1 00:17:40.121 06:12:46 -- fips/fips.sh@155 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:40.378 Running I/O for 10 seconds... 
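Unlike the earlier tls.sh run, which baked the TLS parameters into bdevperf's startup config, fips.sh wires the connection up at runtime: it writes the configured PSK into a 0600 key file and attaches the controller over bdevperf's RPC socket with --psk, which is the call that produced the "TLS support is considered experimental" notice just above. A condensed sketch of that sequence (key material and RPC arguments copied from the trace; key.txt and the relative script path are illustrative):

key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
printf '%s' "$key" > key.txt
chmod 0600 key.txt            # keep the PSK file private, as fips.sh does

./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk key.txt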
00:17:50.334 00:17:50.334 Latency(us) 00:17:50.334 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.334 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:50.334 Verification LBA range: start 0x0 length 0x2000 00:17:50.334 TLSTESTn1 : 10.02 3259.78 12.73 0.00 0.00 39208.57 9466.31 45049.93 00:17:50.334 =================================================================================================================== 00:17:50.334 Total : 3259.78 12.73 0.00 0.00 39208.57 9466.31 45049.93 00:17:50.334 0 00:17:50.334 06:12:56 -- fips/fips.sh@1 -- # cleanup 00:17:50.334 06:12:56 -- fips/fips.sh@15 -- # process_shm --id 0 00:17:50.334 06:12:56 -- common/autotest_common.sh@796 -- # type=--id 00:17:50.334 06:12:56 -- common/autotest_common.sh@797 -- # id=0 00:17:50.334 06:12:56 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:17:50.334 06:12:56 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:50.334 06:12:56 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:17:50.334 06:12:56 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:17:50.334 06:12:56 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:17:50.334 06:12:56 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:50.334 nvmf_trace.0 00:17:50.334 06:12:56 -- common/autotest_common.sh@811 -- # return 0 00:17:50.334 06:12:56 -- fips/fips.sh@16 -- # killprocess 1140673 00:17:50.334 06:12:56 -- common/autotest_common.sh@926 -- # '[' -z 1140673 ']' 00:17:50.334 06:12:56 -- common/autotest_common.sh@930 -- # kill -0 1140673 00:17:50.334 06:12:56 -- common/autotest_common.sh@931 -- # uname 00:17:50.334 06:12:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:50.334 06:12:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1140673 00:17:50.334 06:12:56 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:50.334 06:12:56 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:50.334 06:12:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1140673' 00:17:50.334 killing process with pid 1140673 00:17:50.334 06:12:56 -- common/autotest_common.sh@945 -- # kill 1140673 00:17:50.334 Received shutdown signal, test time was about 10.000000 seconds 00:17:50.334 00:17:50.334 Latency(us) 00:17:50.334 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.334 =================================================================================================================== 00:17:50.334 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:50.334 06:12:56 -- common/autotest_common.sh@950 -- # wait 1140673 00:17:50.592 06:12:57 -- fips/fips.sh@17 -- # nvmftestfini 00:17:50.593 06:12:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:50.593 06:12:57 -- nvmf/common.sh@116 -- # sync 00:17:50.593 06:12:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:50.593 06:12:57 -- nvmf/common.sh@119 -- # set +e 00:17:50.593 06:12:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:50.593 06:12:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:50.851 rmmod nvme_tcp 00:17:50.851 rmmod nvme_fabrics 00:17:50.851 rmmod nvme_keyring 00:17:50.851 06:12:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:50.851 06:12:57 -- nvmf/common.sh@123 -- # set -e 00:17:50.851 06:12:57 -- nvmf/common.sh@124 -- # return 0 
00:17:50.851 06:12:57 -- nvmf/common.sh@477 -- # '[' -n 1140508 ']' 00:17:50.851 06:12:57 -- nvmf/common.sh@478 -- # killprocess 1140508 00:17:50.851 06:12:57 -- common/autotest_common.sh@926 -- # '[' -z 1140508 ']' 00:17:50.851 06:12:57 -- common/autotest_common.sh@930 -- # kill -0 1140508 00:17:50.851 06:12:57 -- common/autotest_common.sh@931 -- # uname 00:17:50.851 06:12:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:50.851 06:12:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1140508 00:17:50.851 06:12:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:50.851 06:12:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:50.851 06:12:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1140508' 00:17:50.851 killing process with pid 1140508 00:17:50.851 06:12:57 -- common/autotest_common.sh@945 -- # kill 1140508 00:17:50.851 06:12:57 -- common/autotest_common.sh@950 -- # wait 1140508 00:17:51.109 06:12:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:51.109 06:12:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:51.109 06:12:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:51.109 06:12:57 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:51.109 06:12:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:51.109 06:12:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.109 06:12:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:51.109 06:12:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.641 06:12:59 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:53.641 06:12:59 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:53.641 00:17:53.641 real 0m17.798s 00:17:53.641 user 0m22.245s 00:17:53.641 sys 0m6.821s 00:17:53.641 06:12:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:53.641 06:12:59 -- common/autotest_common.sh@10 -- # set +x 00:17:53.641 ************************************ 00:17:53.641 END TEST nvmf_fips 00:17:53.641 ************************************ 00:17:53.641 06:12:59 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:17:53.641 06:12:59 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:17:53.641 06:12:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:53.641 06:12:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:53.641 06:12:59 -- common/autotest_common.sh@10 -- # set +x 00:17:53.641 ************************************ 00:17:53.641 START TEST nvmf_fuzz 00:17:53.641 ************************************ 00:17:53.641 06:12:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:17:53.641 * Looking for test storage... 
00:17:53.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:53.641 06:12:59 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:53.641 06:12:59 -- nvmf/common.sh@7 -- # uname -s 00:17:53.641 06:12:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:53.641 06:12:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:53.641 06:12:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:53.641 06:12:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:53.641 06:12:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:53.641 06:12:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:53.641 06:12:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:53.641 06:12:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:53.641 06:12:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:53.641 06:12:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:53.641 06:12:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:53.641 06:12:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:53.641 06:12:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:53.641 06:12:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:53.641 06:12:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:53.641 06:12:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:53.641 06:12:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:53.641 06:12:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:53.641 06:12:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:53.641 06:12:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.641 06:12:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.641 06:12:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.641 06:12:59 -- paths/export.sh@5 -- # export PATH 00:17:53.641 06:12:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.641 06:12:59 -- nvmf/common.sh@46 -- # : 0 00:17:53.641 06:12:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:53.641 06:12:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:53.641 06:12:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:53.641 06:12:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:53.641 06:12:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:53.641 06:12:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:53.641 06:12:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:53.641 06:12:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:53.641 06:12:59 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:17:53.641 06:12:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:53.641 06:12:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:53.641 06:12:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:53.641 06:12:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:53.641 06:12:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:53.641 06:12:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.641 06:12:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:53.641 06:12:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.641 06:12:59 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:53.641 06:12:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:53.641 06:12:59 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:53.641 06:12:59 -- common/autotest_common.sh@10 -- # set +x 00:17:55.542 06:13:01 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:55.542 06:13:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:55.542 06:13:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:55.542 06:13:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:55.542 06:13:01 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:55.542 06:13:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:55.542 06:13:01 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:55.542 06:13:01 -- nvmf/common.sh@294 -- # net_devs=() 00:17:55.542 06:13:01 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:55.542 06:13:01 -- nvmf/common.sh@295 -- # e810=() 00:17:55.542 06:13:01 -- nvmf/common.sh@295 -- # local -ga e810 00:17:55.542 06:13:01 -- nvmf/common.sh@296 -- # x722=() 
00:17:55.542 06:13:01 -- nvmf/common.sh@296 -- # local -ga x722 00:17:55.542 06:13:01 -- nvmf/common.sh@297 -- # mlx=() 00:17:55.542 06:13:01 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:55.542 06:13:01 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:55.542 06:13:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:55.542 06:13:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:55.542 06:13:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:55.542 06:13:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:55.542 06:13:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:55.542 06:13:01 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:55.542 06:13:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:55.542 06:13:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:55.542 06:13:01 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:55.542 06:13:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:55.542 06:13:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:55.542 06:13:01 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:55.542 06:13:01 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:55.542 06:13:01 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:55.542 06:13:01 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:55.542 06:13:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:55.542 06:13:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:55.542 06:13:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:55.542 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:55.542 06:13:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:55.542 06:13:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:55.542 06:13:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:55.542 06:13:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:55.542 06:13:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:55.542 06:13:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:55.542 06:13:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:55.542 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:55.542 06:13:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:55.542 06:13:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:55.542 06:13:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:55.542 06:13:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:55.542 06:13:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:55.542 06:13:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:55.542 06:13:01 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:55.542 06:13:01 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:55.542 06:13:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:55.542 06:13:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:55.542 06:13:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:55.542 06:13:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:55.542 06:13:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:55.542 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:55.542 06:13:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:17:55.542 06:13:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:55.542 06:13:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:55.542 06:13:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:55.542 06:13:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:55.542 06:13:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:55.542 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:55.542 06:13:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:55.542 06:13:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:55.542 06:13:01 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:55.542 06:13:01 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:55.542 06:13:01 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:55.542 06:13:01 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:55.542 06:13:01 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:55.542 06:13:01 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:55.542 06:13:01 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:55.542 06:13:01 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:55.542 06:13:01 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:55.542 06:13:01 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:55.542 06:13:01 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:55.542 06:13:01 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:55.542 06:13:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:55.542 06:13:01 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:55.542 06:13:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:55.542 06:13:01 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:55.542 06:13:01 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:55.542 06:13:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:55.542 06:13:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:55.542 06:13:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:55.542 06:13:01 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:55.542 06:13:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:55.542 06:13:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:55.542 06:13:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:55.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:55.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:17:55.542 00:17:55.542 --- 10.0.0.2 ping statistics --- 00:17:55.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.542 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:17:55.542 06:13:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:55.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:55.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:17:55.542 00:17:55.542 --- 10.0.0.1 ping statistics --- 00:17:55.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:55.542 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:17:55.542 06:13:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:55.542 06:13:01 -- nvmf/common.sh@410 -- # return 0 00:17:55.542 06:13:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:55.542 06:13:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:55.542 06:13:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:55.542 06:13:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:55.542 06:13:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:55.542 06:13:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:55.542 06:13:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:55.542 06:13:01 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1143981 00:17:55.542 06:13:01 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:55.542 06:13:01 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:55.542 06:13:01 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1143981 00:17:55.542 06:13:01 -- common/autotest_common.sh@819 -- # '[' -z 1143981 ']' 00:17:55.542 06:13:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.542 06:13:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:55.543 06:13:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
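For readers following the trace above: the nvmf_tcp_init step it records amounts to a small netns-based loopback topology between the two ice ports found earlier in this log (cvl_0_0 as the target interface, cvl_0_1 as the initiator interface). Condensed from the commands shown in the trace -- a restatement for readability, not the nvmf/common.sh source itself -- the setup is roughly:

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1       # start from clean interfaces
  ip netns add cvl_0_0_ns_spdk                             # private namespace for the NVMe-oF target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                       # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target -> initiator sanity check

The nvmf_tgt launched next is prefixed with 'ip netns exec cvl_0_0_ns_spdk', which is what the NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" ...) assignment above arranges: the target listens on 10.0.0.2:4420 inside the namespace while the fuzzer (and, later, the multiconnection initiators) connect from the default namespace.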
00:17:55.543 06:13:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:55.543 06:13:01 -- common/autotest_common.sh@10 -- # set +x 00:17:56.490 06:13:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:56.490 06:13:02 -- common/autotest_common.sh@852 -- # return 0 00:17:56.490 06:13:02 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:56.490 06:13:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:56.490 06:13:02 -- common/autotest_common.sh@10 -- # set +x 00:17:56.490 06:13:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:56.490 06:13:02 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:17:56.490 06:13:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:56.490 06:13:02 -- common/autotest_common.sh@10 -- # set +x 00:17:56.490 Malloc0 00:17:56.490 06:13:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:56.490 06:13:02 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:56.490 06:13:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:56.490 06:13:02 -- common/autotest_common.sh@10 -- # set +x 00:17:56.490 06:13:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:56.490 06:13:02 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:56.490 06:13:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:56.490 06:13:02 -- common/autotest_common.sh@10 -- # set +x 00:17:56.490 06:13:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:56.490 06:13:02 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:56.490 06:13:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:56.490 06:13:02 -- common/autotest_common.sh@10 -- # set +x 00:17:56.490 06:13:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:56.490 06:13:02 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:17:56.490 06:13:02 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:18:28.559 Fuzzing completed. Shutting down the fuzz application 00:18:28.559 00:18:28.559 Dumping successful admin opcodes: 00:18:28.559 8, 9, 10, 24, 00:18:28.559 Dumping successful io opcodes: 00:18:28.559 0, 9, 00:18:28.559 NS: 0x200003aeff00 I/O qp, Total commands completed: 459546, total successful commands: 2663, random_seed: 445973696 00:18:28.559 NS: 0x200003aeff00 admin qp, Total commands completed: 57536, total successful commands: 461, random_seed: 254282304 00:18:28.559 06:13:33 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:18:28.559 Fuzzing completed. 
Shutting down the fuzz application 00:18:28.559 00:18:28.559 Dumping successful admin opcodes: 00:18:28.559 24, 00:18:28.559 Dumping successful io opcodes: 00:18:28.559 00:18:28.559 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3824891261 00:18:28.559 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3825013070 00:18:28.559 06:13:34 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:28.559 06:13:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:28.559 06:13:34 -- common/autotest_common.sh@10 -- # set +x 00:18:28.559 06:13:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:28.559 06:13:34 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:28.559 06:13:34 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:18:28.559 06:13:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:28.559 06:13:34 -- nvmf/common.sh@116 -- # sync 00:18:28.559 06:13:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:28.559 06:13:34 -- nvmf/common.sh@119 -- # set +e 00:18:28.559 06:13:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:28.559 06:13:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:28.559 rmmod nvme_tcp 00:18:28.559 rmmod nvme_fabrics 00:18:28.559 rmmod nvme_keyring 00:18:28.559 06:13:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:28.559 06:13:34 -- nvmf/common.sh@123 -- # set -e 00:18:28.559 06:13:34 -- nvmf/common.sh@124 -- # return 0 00:18:28.559 06:13:34 -- nvmf/common.sh@477 -- # '[' -n 1143981 ']' 00:18:28.559 06:13:34 -- nvmf/common.sh@478 -- # killprocess 1143981 00:18:28.559 06:13:34 -- common/autotest_common.sh@926 -- # '[' -z 1143981 ']' 00:18:28.559 06:13:34 -- common/autotest_common.sh@930 -- # kill -0 1143981 00:18:28.559 06:13:34 -- common/autotest_common.sh@931 -- # uname 00:18:28.559 06:13:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:28.559 06:13:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1143981 00:18:28.559 06:13:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:28.559 06:13:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:28.559 06:13:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1143981' 00:18:28.559 killing process with pid 1143981 00:18:28.559 06:13:34 -- common/autotest_common.sh@945 -- # kill 1143981 00:18:28.559 06:13:34 -- common/autotest_common.sh@950 -- # wait 1143981 00:18:28.820 06:13:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:28.820 06:13:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:28.820 06:13:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:28.820 06:13:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:28.820 06:13:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:28.820 06:13:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.820 06:13:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:28.820 06:13:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.727 06:13:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:30.727 06:13:37 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:18:30.727 00:18:30.727 real 0m37.632s 00:18:30.727 user 0m52.017s 00:18:30.727 sys 
0m15.509s 00:18:30.727 06:13:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:30.727 06:13:37 -- common/autotest_common.sh@10 -- # set +x 00:18:30.727 ************************************ 00:18:30.727 END TEST nvmf_fuzz 00:18:30.727 ************************************ 00:18:30.727 06:13:37 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:30.727 06:13:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:30.727 06:13:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:30.727 06:13:37 -- common/autotest_common.sh@10 -- # set +x 00:18:30.727 ************************************ 00:18:30.727 START TEST nvmf_multiconnection 00:18:30.727 ************************************ 00:18:30.727 06:13:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:30.986 * Looking for test storage... 00:18:30.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:30.986 06:13:37 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:30.986 06:13:37 -- nvmf/common.sh@7 -- # uname -s 00:18:30.986 06:13:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:30.986 06:13:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:30.986 06:13:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:30.986 06:13:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:30.986 06:13:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:30.986 06:13:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:30.986 06:13:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:30.986 06:13:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:30.986 06:13:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:30.986 06:13:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:30.986 06:13:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:30.986 06:13:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:30.986 06:13:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:30.986 06:13:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:30.986 06:13:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:30.986 06:13:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:30.986 06:13:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:30.986 06:13:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:30.986 06:13:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:30.986 06:13:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.986 06:13:37 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.986 06:13:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.986 06:13:37 -- paths/export.sh@5 -- # export PATH 00:18:30.986 06:13:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.986 06:13:37 -- nvmf/common.sh@46 -- # : 0 00:18:30.986 06:13:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:30.986 06:13:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:30.986 06:13:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:30.986 06:13:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:30.986 06:13:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:30.986 06:13:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:30.986 06:13:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:30.986 06:13:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:30.986 06:13:37 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:30.986 06:13:37 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:30.986 06:13:37 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:18:30.986 06:13:37 -- target/multiconnection.sh@16 -- # nvmftestinit 00:18:30.986 06:13:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:30.986 06:13:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:30.986 06:13:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:30.986 06:13:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:30.986 06:13:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:30.986 06:13:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.986 06:13:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:30.986 06:13:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.986 06:13:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:30.986 06:13:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:30.986 06:13:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:30.986 06:13:37 -- common/autotest_common.sh@10 -- 
# set +x 00:18:32.890 06:13:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:32.890 06:13:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:32.890 06:13:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:32.890 06:13:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:32.890 06:13:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:32.890 06:13:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:32.890 06:13:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:32.890 06:13:39 -- nvmf/common.sh@294 -- # net_devs=() 00:18:32.890 06:13:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:32.890 06:13:39 -- nvmf/common.sh@295 -- # e810=() 00:18:32.890 06:13:39 -- nvmf/common.sh@295 -- # local -ga e810 00:18:32.890 06:13:39 -- nvmf/common.sh@296 -- # x722=() 00:18:32.890 06:13:39 -- nvmf/common.sh@296 -- # local -ga x722 00:18:32.890 06:13:39 -- nvmf/common.sh@297 -- # mlx=() 00:18:32.890 06:13:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:32.890 06:13:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:32.890 06:13:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:32.890 06:13:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:32.890 06:13:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:32.890 06:13:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:32.890 06:13:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:32.890 06:13:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:32.890 06:13:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:32.890 06:13:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:32.890 06:13:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:32.890 06:13:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:32.890 06:13:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:32.890 06:13:39 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:32.890 06:13:39 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:32.890 06:13:39 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:32.890 06:13:39 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:32.890 06:13:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:32.890 06:13:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:32.890 06:13:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:32.890 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:32.890 06:13:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:32.890 06:13:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:32.890 06:13:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.890 06:13:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.890 06:13:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:32.890 06:13:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:32.891 06:13:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:32.891 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:32.891 06:13:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:32.891 06:13:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:32.891 06:13:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.891 06:13:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.891 06:13:39 -- 
nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:32.891 06:13:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:32.891 06:13:39 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:32.891 06:13:39 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:32.891 06:13:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:32.891 06:13:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.891 06:13:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:32.891 06:13:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.891 06:13:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:32.891 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:32.891 06:13:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.891 06:13:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:32.891 06:13:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.891 06:13:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:32.891 06:13:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.891 06:13:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:32.891 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:32.891 06:13:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.891 06:13:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:32.891 06:13:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:32.891 06:13:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:32.891 06:13:39 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:32.891 06:13:39 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:32.891 06:13:39 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:32.891 06:13:39 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:32.891 06:13:39 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:32.891 06:13:39 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:32.891 06:13:39 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:32.891 06:13:39 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:32.891 06:13:39 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:32.891 06:13:39 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:32.891 06:13:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:32.891 06:13:39 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:32.891 06:13:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:32.891 06:13:39 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:32.891 06:13:39 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:32.891 06:13:39 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:32.891 06:13:39 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:32.891 06:13:39 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:32.891 06:13:39 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:32.891 06:13:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:32.891 06:13:39 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:32.891 06:13:39 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:32.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:32.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:18:32.891 00:18:32.891 --- 10.0.0.2 ping statistics --- 00:18:32.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.891 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:18:32.891 06:13:39 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:32.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:32.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:18:32.891 00:18:32.891 --- 10.0.0.1 ping statistics --- 00:18:32.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.891 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:18:32.891 06:13:39 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:32.891 06:13:39 -- nvmf/common.sh@410 -- # return 0 00:18:32.891 06:13:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:32.891 06:13:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:32.891 06:13:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:32.891 06:13:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:32.891 06:13:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:32.891 06:13:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:32.891 06:13:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:32.891 06:13:39 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:18:32.891 06:13:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:32.891 06:13:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:32.891 06:13:39 -- common/autotest_common.sh@10 -- # set +x 00:18:32.891 06:13:39 -- nvmf/common.sh@469 -- # nvmfpid=1149982 00:18:32.891 06:13:39 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:32.891 06:13:39 -- nvmf/common.sh@470 -- # waitforlisten 1149982 00:18:32.891 06:13:39 -- common/autotest_common.sh@819 -- # '[' -z 1149982 ']' 00:18:32.891 06:13:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.891 06:13:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:32.891 06:13:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.891 06:13:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:32.891 06:13:39 -- common/autotest_common.sh@10 -- # set +x 00:18:33.150 [2024-07-13 06:13:39.435724] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:33.150 [2024-07-13 06:13:39.435795] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:33.150 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.150 [2024-07-13 06:13:39.498692] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:33.150 [2024-07-13 06:13:39.605132] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:33.150 [2024-07-13 06:13:39.605277] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:33.150 [2024-07-13 06:13:39.605309] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:33.150 [2024-07-13 06:13:39.605323] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:33.150 [2024-07-13 06:13:39.605621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.150 [2024-07-13 06:13:39.605640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:33.150 [2024-07-13 06:13:39.605691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:33.150 [2024-07-13 06:13:39.605693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.084 06:13:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:34.084 06:13:40 -- common/autotest_common.sh@852 -- # return 0 00:18:34.084 06:13:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:34.084 06:13:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:34.084 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.084 06:13:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:34.084 06:13:40 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:34.084 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.084 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.084 [2024-07-13 06:13:40.470658] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:34.084 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.084 06:13:40 -- target/multiconnection.sh@21 -- # seq 1 11 00:18:34.084 06:13:40 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:34.084 06:13:40 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:34.084 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.084 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.084 Malloc1 00:18:34.084 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.084 06:13:40 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:18:34.084 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.084 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.084 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.084 06:13:40 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:34.084 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.084 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.084 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.084 06:13:40 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:34.084 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.084 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.084 [2024-07-13 06:13:40.526215] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:34.084 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.084 06:13:40 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:34.084 06:13:40 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:34.084 06:13:40 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.084 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.084 Malloc2 00:18:34.084 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.084 06:13:40 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:34.084 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.084 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.084 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.084 06:13:40 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:34.084 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.084 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.084 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.084 06:13:40 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:34.084 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.084 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.084 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.084 06:13:40 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:34.084 06:13:40 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:34.084 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.084 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.345 Malloc3 00:18:34.345 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.345 06:13:40 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:18:34.345 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.345 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.345 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.345 06:13:40 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:34.345 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.345 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.345 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.345 06:13:40 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:18:34.345 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.345 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.345 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.345 06:13:40 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:34.345 06:13:40 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:34.345 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.345 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.345 Malloc4 00:18:34.345 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.345 06:13:40 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:18:34.345 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.345 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.345 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.345 06:13:40 -- target/multiconnection.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:34.345 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.345 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.345 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.345 06:13:40 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:18:34.345 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.345 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.345 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.345 06:13:40 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:34.345 06:13:40 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:34.345 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.345 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.345 Malloc5 00:18:34.345 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.345 06:13:40 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:18:34.345 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.345 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.345 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.345 06:13:40 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:34.345 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.345 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.345 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.345 06:13:40 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:18:34.345 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.345 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.345 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.345 06:13:40 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:34.345 06:13:40 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:18:34.345 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.345 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.345 Malloc6 00:18:34.345 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.345 06:13:40 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:18:34.345 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.345 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.345 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.345 06:13:40 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:18:34.345 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.345 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.345 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.345 06:13:40 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:18:34.345 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.345 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.345 06:13:40 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.345 06:13:40 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:34.345 06:13:40 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:18:34.345 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.345 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.345 Malloc7 00:18:34.345 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.346 06:13:40 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:18:34.346 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.346 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.346 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.346 06:13:40 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:18:34.346 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.346 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.346 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.346 06:13:40 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:18:34.346 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.346 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.346 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.346 06:13:40 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:34.346 06:13:40 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:18:34.346 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.346 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.346 Malloc8 00:18:34.346 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.346 06:13:40 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:18:34.346 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.346 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.605 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.605 06:13:40 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:18:34.605 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.605 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.605 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.605 06:13:40 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:18:34.605 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.605 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.605 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.605 06:13:40 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:34.605 06:13:40 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:18:34.605 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.605 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.605 Malloc9 00:18:34.605 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.605 06:13:40 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 
00:18:34.605 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.605 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.605 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.605 06:13:40 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:18:34.605 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.605 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.605 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.605 06:13:40 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:18:34.605 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.605 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.605 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.605 06:13:40 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:34.605 06:13:40 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:18:34.605 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.605 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.605 Malloc10 00:18:34.605 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.605 06:13:40 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:18:34.605 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.605 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.605 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.605 06:13:40 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:18:34.605 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.605 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.606 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.606 06:13:40 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:18:34.606 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.606 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.606 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.606 06:13:40 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:34.606 06:13:40 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:18:34.606 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.606 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.606 Malloc11 00:18:34.606 06:13:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.606 06:13:40 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:18:34.606 06:13:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.606 06:13:40 -- common/autotest_common.sh@10 -- # set +x 00:18:34.606 06:13:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.606 06:13:41 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:18:34.606 06:13:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.606 06:13:41 -- common/autotest_common.sh@10 -- # set +x 00:18:34.606 06:13:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.606 06:13:41 -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:18:34.606 06:13:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.606 06:13:41 -- common/autotest_common.sh@10 -- # set +x 00:18:34.606 06:13:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.606 06:13:41 -- target/multiconnection.sh@28 -- # seq 1 11 00:18:34.606 06:13:41 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:34.606 06:13:41 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:35.173 06:13:41 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:18:35.173 06:13:41 -- common/autotest_common.sh@1177 -- # local i=0 00:18:35.173 06:13:41 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:35.173 06:13:41 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:35.173 06:13:41 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:37.710 06:13:43 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:37.710 06:13:43 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:37.710 06:13:43 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:18:37.710 06:13:43 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:37.710 06:13:43 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:37.710 06:13:43 -- common/autotest_common.sh@1187 -- # return 0 00:18:37.710 06:13:43 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:37.710 06:13:43 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:18:37.968 06:13:44 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:18:37.968 06:13:44 -- common/autotest_common.sh@1177 -- # local i=0 00:18:37.968 06:13:44 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:37.968 06:13:44 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:37.968 06:13:44 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:39.874 06:13:46 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:39.874 06:13:46 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:39.874 06:13:46 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:18:39.874 06:13:46 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:39.874 06:13:46 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:39.874 06:13:46 -- common/autotest_common.sh@1187 -- # return 0 00:18:39.874 06:13:46 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:39.874 06:13:46 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:18:40.813 06:13:47 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:40.813 06:13:47 -- common/autotest_common.sh@1177 -- # local i=0 00:18:40.814 06:13:47 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:40.814 06:13:47 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:40.814 06:13:47 -- 
common/autotest_common.sh@1184 -- # sleep 2 00:18:42.717 06:13:49 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:42.717 06:13:49 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:42.717 06:13:49 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:18:42.717 06:13:49 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:42.717 06:13:49 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:42.717 06:13:49 -- common/autotest_common.sh@1187 -- # return 0 00:18:42.717 06:13:49 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:42.717 06:13:49 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:18:43.654 06:13:49 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:18:43.654 06:13:49 -- common/autotest_common.sh@1177 -- # local i=0 00:18:43.654 06:13:49 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:43.654 06:13:49 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:43.654 06:13:49 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:45.558 06:13:51 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:45.558 06:13:51 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:45.558 06:13:51 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:18:45.558 06:13:51 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:45.558 06:13:51 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:45.558 06:13:51 -- common/autotest_common.sh@1187 -- # return 0 00:18:45.558 06:13:51 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:45.558 06:13:51 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:18:46.122 06:13:52 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:46.122 06:13:52 -- common/autotest_common.sh@1177 -- # local i=0 00:18:46.122 06:13:52 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:46.122 06:13:52 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:46.122 06:13:52 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:48.655 06:13:54 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:48.655 06:13:54 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:48.655 06:13:54 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:18:48.655 06:13:54 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:48.655 06:13:54 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:48.655 06:13:54 -- common/autotest_common.sh@1187 -- # return 0 00:18:48.655 06:13:54 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:48.655 06:13:54 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:18:48.913 06:13:55 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:48.913 06:13:55 -- common/autotest_common.sh@1177 -- # local i=0 00:18:48.913 06:13:55 -- common/autotest_common.sh@1178 -- # local 
nvme_device_counter=1 nvme_devices=0 00:18:48.913 06:13:55 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:48.913 06:13:55 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:50.819 06:13:57 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:50.819 06:13:57 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:50.819 06:13:57 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:18:50.819 06:13:57 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:50.819 06:13:57 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:50.819 06:13:57 -- common/autotest_common.sh@1187 -- # return 0 00:18:50.819 06:13:57 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:50.819 06:13:57 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:18:51.753 06:13:58 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:18:51.753 06:13:58 -- common/autotest_common.sh@1177 -- # local i=0 00:18:51.753 06:13:58 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:51.753 06:13:58 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:51.753 06:13:58 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:53.654 06:14:00 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:53.654 06:14:00 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:53.654 06:14:00 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:18:53.654 06:14:00 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:53.654 06:14:00 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:53.654 06:14:00 -- common/autotest_common.sh@1187 -- # return 0 00:18:53.654 06:14:00 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:53.654 06:14:00 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:18:54.601 06:14:00 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:18:54.601 06:14:00 -- common/autotest_common.sh@1177 -- # local i=0 00:18:54.601 06:14:00 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:54.601 06:14:00 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:54.601 06:14:00 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:56.498 06:14:02 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:56.499 06:14:02 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:56.499 06:14:02 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:18:56.499 06:14:02 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:56.499 06:14:02 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:56.499 06:14:02 -- common/autotest_common.sh@1187 -- # return 0 00:18:56.499 06:14:02 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:56.499 06:14:02 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:18:57.063 06:14:03 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:18:57.063 
06:14:03 -- common/autotest_common.sh@1177 -- # local i=0 00:18:57.063 06:14:03 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:57.063 06:14:03 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:57.063 06:14:03 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:59.587 06:14:05 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:59.587 06:14:05 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:59.587 06:14:05 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:18:59.587 06:14:05 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:59.587 06:14:05 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:59.587 06:14:05 -- common/autotest_common.sh@1187 -- # return 0 00:18:59.587 06:14:05 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:59.587 06:14:05 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:19:00.153 06:14:06 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:19:00.153 06:14:06 -- common/autotest_common.sh@1177 -- # local i=0 00:19:00.153 06:14:06 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:00.153 06:14:06 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:00.153 06:14:06 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:02.050 06:14:08 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:02.050 06:14:08 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:02.050 06:14:08 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:19:02.050 06:14:08 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:02.050 06:14:08 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:02.050 06:14:08 -- common/autotest_common.sh@1187 -- # return 0 00:19:02.050 06:14:08 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:02.050 06:14:08 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:19:02.984 06:14:09 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:19:02.984 06:14:09 -- common/autotest_common.sh@1177 -- # local i=0 00:19:02.984 06:14:09 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:02.984 06:14:09 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:02.984 06:14:09 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:04.881 06:14:11 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:04.881 06:14:11 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:04.881 06:14:11 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:19:04.881 06:14:11 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:04.881 06:14:11 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:04.881 06:14:11 -- common/autotest_common.sh@1187 -- # return 0 00:19:04.881 06:14:11 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:19:04.881 [global] 00:19:04.881 thread=1 00:19:04.881 invalidate=1 00:19:04.881 rw=read 00:19:04.881 time_based=1 00:19:04.881 
runtime=10 00:19:04.881 ioengine=libaio 00:19:04.881 direct=1 00:19:04.881 bs=262144 00:19:04.882 iodepth=64 00:19:04.882 norandommap=1 00:19:04.882 numjobs=1 00:19:04.882 00:19:04.882 [job0] 00:19:04.882 filename=/dev/nvme0n1 00:19:04.882 [job1] 00:19:04.882 filename=/dev/nvme10n1 00:19:04.882 [job2] 00:19:04.882 filename=/dev/nvme1n1 00:19:04.882 [job3] 00:19:04.882 filename=/dev/nvme2n1 00:19:04.882 [job4] 00:19:04.882 filename=/dev/nvme3n1 00:19:04.882 [job5] 00:19:04.882 filename=/dev/nvme4n1 00:19:04.882 [job6] 00:19:04.882 filename=/dev/nvme5n1 00:19:04.882 [job7] 00:19:04.882 filename=/dev/nvme6n1 00:19:04.882 [job8] 00:19:04.882 filename=/dev/nvme7n1 00:19:04.882 [job9] 00:19:04.882 filename=/dev/nvme8n1 00:19:04.882 [job10] 00:19:04.882 filename=/dev/nvme9n1 00:19:05.138 Could not set queue depth (nvme0n1) 00:19:05.138 Could not set queue depth (nvme10n1) 00:19:05.138 Could not set queue depth (nvme1n1) 00:19:05.138 Could not set queue depth (nvme2n1) 00:19:05.138 Could not set queue depth (nvme3n1) 00:19:05.138 Could not set queue depth (nvme4n1) 00:19:05.138 Could not set queue depth (nvme5n1) 00:19:05.138 Could not set queue depth (nvme6n1) 00:19:05.138 Could not set queue depth (nvme7n1) 00:19:05.138 Could not set queue depth (nvme8n1) 00:19:05.138 Could not set queue depth (nvme9n1) 00:19:05.138 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:05.138 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:05.138 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:05.138 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:05.138 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:05.138 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:05.138 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:05.138 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:05.138 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:05.138 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:05.138 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:05.138 fio-3.35 00:19:05.138 Starting 11 threads 00:19:17.340 00:19:17.340 job0: (groupid=0, jobs=1): err= 0: pid=1154356: Sat Jul 13 06:14:22 2024 00:19:17.340 read: IOPS=494, BW=124MiB/s (130MB/s)(1254MiB/10148msec) 00:19:17.340 slat (usec): min=9, max=693279, avg=1441.43, stdev=13600.74 00:19:17.340 clat (msec): min=3, max=1223, avg=127.98, stdev=124.57 00:19:17.340 lat (msec): min=3, max=1223, avg=129.42, stdev=126.19 00:19:17.340 clat percentiles (msec): 00:19:17.340 | 1.00th=[ 12], 5.00th=[ 24], 10.00th=[ 40], 20.00th=[ 65], 00:19:17.340 | 30.00th=[ 81], 40.00th=[ 96], 50.00th=[ 111], 60.00th=[ 124], 00:19:17.340 | 70.00th=[ 138], 80.00th=[ 153], 90.00th=[ 194], 95.00th=[ 226], 00:19:17.340 | 99.00th=[ 718], 99.50th=[ 1083], 99.90th=[ 1083], 99.95th=[ 1083], 00:19:17.340 | 99.99th=[ 1217] 00:19:17.340 bw ( KiB/s): min= 8704, max=227840, per=6.67%, 
avg=126745.60, stdev=59399.31, samples=20 00:19:17.340 iops : min= 34, max= 890, avg=495.10, stdev=232.03, samples=20 00:19:17.340 lat (msec) : 4=0.04%, 10=0.64%, 20=3.31%, 50=9.53%, 100=28.61% 00:19:17.340 lat (msec) : 250=53.68%, 500=1.95%, 750=1.28%, 1000=0.04%, 2000=0.92% 00:19:17.340 cpu : usr=0.32%, sys=1.70%, ctx=1402, majf=0, minf=4097 00:19:17.340 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:19:17.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:17.340 issued rwts: total=5015,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.340 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:17.340 job1: (groupid=0, jobs=1): err= 0: pid=1154357: Sat Jul 13 06:14:22 2024 00:19:17.340 read: IOPS=469, BW=117MiB/s (123MB/s)(1191MiB/10152msec) 00:19:17.340 slat (usec): min=9, max=561515, avg=1165.11, stdev=10193.47 00:19:17.340 clat (msec): min=3, max=1066, avg=135.10, stdev=115.56 00:19:17.340 lat (msec): min=3, max=1087, avg=136.27, stdev=117.26 00:19:17.340 clat percentiles (msec): 00:19:17.340 | 1.00th=[ 12], 5.00th=[ 48], 10.00th=[ 64], 20.00th=[ 83], 00:19:17.340 | 30.00th=[ 95], 40.00th=[ 105], 50.00th=[ 115], 60.00th=[ 127], 00:19:17.340 | 70.00th=[ 138], 80.00th=[ 153], 90.00th=[ 192], 95.00th=[ 228], 00:19:17.340 | 99.00th=[ 844], 99.50th=[ 953], 99.90th=[ 1053], 99.95th=[ 1053], 00:19:17.340 | 99.99th=[ 1070] 00:19:17.340 bw ( KiB/s): min= 6144, max=180736, per=6.33%, avg=120345.60, stdev=49869.15, samples=20 00:19:17.340 iops : min= 24, max= 706, avg=470.10, stdev=194.80, samples=20 00:19:17.340 lat (msec) : 4=0.04%, 10=0.80%, 20=1.09%, 50=3.82%, 100=29.55% 00:19:17.340 lat (msec) : 250=60.04%, 500=2.96%, 750=0.38%, 1000=0.90%, 2000=0.42% 00:19:17.340 cpu : usr=0.32%, sys=1.51%, ctx=1569, majf=0, minf=4097 00:19:17.340 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:17.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:17.340 issued rwts: total=4765,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.340 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:17.340 job2: (groupid=0, jobs=1): err= 0: pid=1154362: Sat Jul 13 06:14:22 2024 00:19:17.340 read: IOPS=596, BW=149MiB/s (156MB/s)(1513MiB/10149msec) 00:19:17.340 slat (usec): min=9, max=183905, avg=1388.88, stdev=6144.96 00:19:17.340 clat (usec): min=1769, max=938905, avg=105845.84, stdev=102610.33 00:19:17.340 lat (usec): min=1789, max=1025.8k, avg=107234.71, stdev=103796.06 00:19:17.340 clat percentiles (msec): 00:19:17.340 | 1.00th=[ 3], 5.00th=[ 10], 10.00th=[ 44], 20.00th=[ 57], 00:19:17.340 | 30.00th=[ 65], 40.00th=[ 75], 50.00th=[ 86], 60.00th=[ 94], 00:19:17.340 | 70.00th=[ 110], 80.00th=[ 130], 90.00th=[ 165], 95.00th=[ 218], 00:19:17.340 | 99.00th=[ 651], 99.50th=[ 802], 99.90th=[ 885], 99.95th=[ 885], 00:19:17.340 | 99.99th=[ 936] 00:19:17.340 bw ( KiB/s): min=33280, max=264704, per=8.06%, avg=153292.80, stdev=62100.12, samples=20 00:19:17.340 iops : min= 130, max= 1034, avg=598.70, stdev=242.61, samples=20 00:19:17.340 lat (msec) : 2=0.05%, 4=1.70%, 10=3.50%, 20=1.67%, 50=5.57% 00:19:17.340 lat (msec) : 100=52.48%, 250=31.38%, 500=1.67%, 750=1.21%, 1000=0.78% 00:19:17.340 cpu : usr=0.41%, sys=2.13%, ctx=1463, majf=0, minf=4097 00:19:17.340 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 
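Note: before the read phase above, each of the 11 subsystems was attached with nvme connect and then polled by waitforserial until a namespace with the matching SPDKn serial appeared in lsblk. A minimal stand-alone sketch of that pattern, simplified from the traced helper (the --hostid argument used in the actual run is omitted; the NQN, target address and serial are the values from this run):

wait_for_serial() {
    # poll lsblk until a block device with the given serial shows up (roughly 32 s max)
    local serial=$1 tries=0
    while (( tries++ <= 15 )); do
        sleep 2
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
    done
    return 1
}

nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode4 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
wait_for_serial SPDK4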
00:19:17.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:17.340 issued rwts: total=6052,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.340 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:17.340 job3: (groupid=0, jobs=1): err= 0: pid=1154363: Sat Jul 13 06:14:22 2024 00:19:17.340 read: IOPS=789, BW=197MiB/s (207MB/s)(2003MiB/10151msec) 00:19:17.340 slat (usec): min=11, max=345963, avg=1227.42, stdev=6914.59 00:19:17.340 clat (msec): min=8, max=1060, avg=79.81, stdev=91.53 00:19:17.340 lat (msec): min=8, max=1060, avg=81.04, stdev=92.86 00:19:17.340 clat percentiles (msec): 00:19:17.340 | 1.00th=[ 28], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 50], 00:19:17.340 | 30.00th=[ 54], 40.00th=[ 58], 50.00th=[ 64], 60.00th=[ 71], 00:19:17.340 | 70.00th=[ 80], 80.00th=[ 89], 90.00th=[ 100], 95.00th=[ 111], 00:19:17.340 | 99.00th=[ 535], 99.50th=[ 869], 99.90th=[ 1003], 99.95th=[ 1003], 00:19:17.340 | 99.99th=[ 1062] 00:19:17.340 bw ( KiB/s): min=13824, max=442368, per=10.70%, avg=203443.20, stdev=105865.44, samples=20 00:19:17.340 iops : min= 54, max= 1728, avg=794.70, stdev=413.54, samples=20 00:19:17.340 lat (msec) : 10=0.01%, 20=0.21%, 50=20.28%, 100=70.45%, 250=6.33% 00:19:17.340 lat (msec) : 500=1.49%, 750=0.69%, 1000=0.41%, 2000=0.12% 00:19:17.340 cpu : usr=0.41%, sys=2.92%, ctx=1669, majf=0, minf=4097 00:19:17.340 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:17.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:17.340 issued rwts: total=8011,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.340 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:17.340 job4: (groupid=0, jobs=1): err= 0: pid=1154364: Sat Jul 13 06:14:22 2024 00:19:17.340 read: IOPS=536, BW=134MiB/s (141MB/s)(1361MiB/10155msec) 00:19:17.340 slat (usec): min=9, max=630679, avg=1185.07, stdev=10937.19 00:19:17.340 clat (msec): min=2, max=1247, avg=118.13, stdev=108.71 00:19:17.340 lat (msec): min=2, max=1247, avg=119.31, stdev=110.32 00:19:17.340 clat percentiles (msec): 00:19:17.340 | 1.00th=[ 21], 5.00th=[ 44], 10.00th=[ 54], 20.00th=[ 68], 00:19:17.340 | 30.00th=[ 78], 40.00th=[ 84], 50.00th=[ 92], 60.00th=[ 109], 00:19:17.340 | 70.00th=[ 124], 80.00th=[ 142], 90.00th=[ 165], 95.00th=[ 211], 00:19:17.340 | 99.00th=[ 735], 99.50th=[ 835], 99.90th=[ 902], 99.95th=[ 911], 00:19:17.340 | 99.99th=[ 1250] 00:19:17.340 bw ( KiB/s): min=13824, max=245248, per=7.24%, avg=137728.00, stdev=64701.85, samples=20 00:19:17.340 iops : min= 54, max= 958, avg=538.00, stdev=252.74, samples=20 00:19:17.340 lat (msec) : 4=0.02%, 10=0.18%, 20=0.73%, 50=7.53%, 100=46.38% 00:19:17.340 lat (msec) : 250=41.53%, 500=1.30%, 750=1.32%, 1000=0.97%, 2000=0.02% 00:19:17.340 cpu : usr=0.30%, sys=1.62%, ctx=1576, majf=0, minf=3721 00:19:17.340 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:17.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:17.340 issued rwts: total=5444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.340 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:17.340 job5: (groupid=0, jobs=1): err= 0: pid=1154365: Sat Jul 13 06:14:22 2024 00:19:17.340 read: IOPS=980, BW=245MiB/s 
(257MB/s)(2487MiB/10148msec) 00:19:17.340 slat (usec): min=12, max=778077, avg=940.60, stdev=8504.69 00:19:17.340 clat (msec): min=2, max=1100, avg=64.29, stdev=86.42 00:19:17.340 lat (msec): min=2, max=1218, avg=65.23, stdev=87.36 00:19:17.340 clat percentiles (msec): 00:19:17.340 | 1.00th=[ 4], 5.00th=[ 13], 10.00th=[ 29], 20.00th=[ 37], 00:19:17.340 | 30.00th=[ 42], 40.00th=[ 46], 50.00th=[ 51], 60.00th=[ 57], 00:19:17.340 | 70.00th=[ 64], 80.00th=[ 74], 90.00th=[ 91], 95.00th=[ 125], 00:19:17.340 | 99.00th=[ 460], 99.50th=[ 793], 99.90th=[ 1099], 99.95th=[ 1099], 00:19:17.340 | 99.99th=[ 1099] 00:19:17.340 bw ( KiB/s): min=13824, max=414720, per=13.31%, avg=253056.00, stdev=106418.22, samples=20 00:19:17.340 iops : min= 54, max= 1620, avg=988.50, stdev=415.70, samples=20 00:19:17.340 lat (msec) : 4=1.16%, 10=3.23%, 20=1.84%, 50=42.15%, 100=43.39% 00:19:17.340 lat (msec) : 250=6.89%, 500=0.52%, 750=0.20%, 1000=0.27%, 2000=0.36% 00:19:17.340 cpu : usr=0.59%, sys=3.55%, ctx=2083, majf=0, minf=4097 00:19:17.340 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:17.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:17.340 issued rwts: total=9948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.340 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:17.340 job6: (groupid=0, jobs=1): err= 0: pid=1154366: Sat Jul 13 06:14:22 2024 00:19:17.340 read: IOPS=700, BW=175MiB/s (184MB/s)(1765MiB/10074msec) 00:19:17.340 slat (usec): min=14, max=250576, avg=1328.22, stdev=5612.37 00:19:17.340 clat (usec): min=1942, max=992299, avg=89904.03, stdev=93849.59 00:19:17.340 lat (usec): min=1984, max=992366, avg=91232.25, stdev=95119.19 00:19:17.340 clat percentiles (msec): 00:19:17.340 | 1.00th=[ 9], 5.00th=[ 34], 10.00th=[ 43], 20.00th=[ 53], 00:19:17.340 | 30.00th=[ 59], 40.00th=[ 67], 50.00th=[ 75], 60.00th=[ 85], 00:19:17.340 | 70.00th=[ 95], 80.00th=[ 104], 90.00th=[ 121], 95.00th=[ 146], 00:19:17.340 | 99.00th=[ 558], 99.50th=[ 877], 99.90th=[ 961], 99.95th=[ 995], 00:19:17.340 | 99.99th=[ 995] 00:19:17.340 bw ( KiB/s): min= 7168, max=322560, per=9.42%, avg=179148.80, stdev=83731.37, samples=20 00:19:17.340 iops : min= 28, max= 1260, avg=699.80, stdev=327.08, samples=20 00:19:17.340 lat (msec) : 2=0.01%, 4=0.24%, 10=1.03%, 20=1.22%, 50=14.06% 00:19:17.340 lat (msec) : 100=59.75%, 250=21.34%, 500=0.85%, 750=0.69%, 1000=0.79% 00:19:17.340 cpu : usr=0.57%, sys=2.54%, ctx=1543, majf=0, minf=4097 00:19:17.340 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:17.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:17.340 issued rwts: total=7061,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.341 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:17.341 job7: (groupid=0, jobs=1): err= 0: pid=1154367: Sat Jul 13 06:14:22 2024 00:19:17.341 read: IOPS=523, BW=131MiB/s (137MB/s)(1328MiB/10145msec) 00:19:17.341 slat (usec): min=9, max=131758, avg=1195.10, stdev=4819.83 00:19:17.341 clat (usec): min=934, max=1044.5k, avg=120980.36, stdev=109974.46 00:19:17.341 lat (usec): min=953, max=1176.0k, avg=122175.46, stdev=110553.22 00:19:17.341 clat percentiles (msec): 00:19:17.341 | 1.00th=[ 5], 5.00th=[ 10], 10.00th=[ 13], 20.00th=[ 36], 00:19:17.341 | 30.00th=[ 90], 40.00th=[ 105], 50.00th=[ 116], 60.00th=[ 128], 
00:19:17.341 | 70.00th=[ 138], 80.00th=[ 153], 90.00th=[ 192], 95.00th=[ 226], 00:19:17.341 | 99.00th=[ 634], 99.50th=[ 986], 99.90th=[ 1045], 99.95th=[ 1045], 00:19:17.341 | 99.99th=[ 1045] 00:19:17.341 bw ( KiB/s): min=30720, max=315904, per=7.07%, avg=134323.20, stdev=55482.14, samples=20 00:19:17.341 iops : min= 120, max= 1234, avg=524.70, stdev=216.73, samples=20 00:19:17.341 lat (usec) : 1000=0.02% 00:19:17.341 lat (msec) : 2=0.06%, 4=0.72%, 10=4.88%, 20=12.47%, 50=3.03% 00:19:17.341 lat (msec) : 100=14.60%, 250=60.56%, 500=2.22%, 750=0.92%, 1000=0.21% 00:19:17.341 lat (msec) : 2000=0.32% 00:19:17.341 cpu : usr=0.18%, sys=1.82%, ctx=1458, majf=0, minf=4097 00:19:17.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:17.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:17.341 issued rwts: total=5310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.341 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:17.341 job8: (groupid=0, jobs=1): err= 0: pid=1154368: Sat Jul 13 06:14:22 2024 00:19:17.341 read: IOPS=904, BW=226MiB/s (237MB/s)(2266MiB/10024msec) 00:19:17.341 slat (usec): min=9, max=68997, avg=1000.57, stdev=3365.17 00:19:17.341 clat (msec): min=4, max=270, avg=69.73, stdev=44.61 00:19:17.341 lat (msec): min=6, max=270, avg=70.73, stdev=45.23 00:19:17.341 clat percentiles (msec): 00:19:17.341 | 1.00th=[ 18], 5.00th=[ 27], 10.00th=[ 28], 20.00th=[ 30], 00:19:17.341 | 30.00th=[ 34], 40.00th=[ 48], 50.00th=[ 56], 60.00th=[ 69], 00:19:17.341 | 70.00th=[ 90], 80.00th=[ 109], 90.00th=[ 131], 95.00th=[ 148], 00:19:17.341 | 99.00th=[ 209], 99.50th=[ 222], 99.90th=[ 234], 99.95th=[ 247], 00:19:17.341 | 99.99th=[ 271] 00:19:17.341 bw ( KiB/s): min=72704, max=568320, per=12.12%, avg=230425.60, stdev=137485.16, samples=20 00:19:17.341 iops : min= 284, max= 2220, avg=900.10, stdev=537.05, samples=20 00:19:17.341 lat (msec) : 10=0.24%, 20=0.98%, 50=41.44%, 100=32.71%, 250=24.61% 00:19:17.341 lat (msec) : 500=0.01% 00:19:17.341 cpu : usr=0.54%, sys=3.20%, ctx=1803, majf=0, minf=4097 00:19:17.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:17.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:17.341 issued rwts: total=9064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.341 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:17.341 job9: (groupid=0, jobs=1): err= 0: pid=1154369: Sat Jul 13 06:14:22 2024 00:19:17.341 read: IOPS=670, BW=168MiB/s (176MB/s)(1701MiB/10149msec) 00:19:17.341 slat (usec): min=13, max=198126, avg=1336.92, stdev=5114.00 00:19:17.341 clat (msec): min=10, max=989, avg=94.04, stdev=102.51 00:19:17.341 lat (msec): min=10, max=989, avg=95.38, stdev=103.19 00:19:17.341 clat percentiles (msec): 00:19:17.341 | 1.00th=[ 27], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 34], 00:19:17.341 | 30.00th=[ 56], 40.00th=[ 66], 50.00th=[ 74], 60.00th=[ 83], 00:19:17.341 | 70.00th=[ 90], 80.00th=[ 121], 90.00th=[ 150], 95.00th=[ 182], 00:19:17.341 | 99.00th=[ 634], 99.50th=[ 793], 99.90th=[ 969], 99.95th=[ 969], 00:19:17.341 | 99.99th=[ 986] 00:19:17.341 bw ( KiB/s): min=19456, max=509952, per=9.08%, avg=172569.60, stdev=120192.65, samples=20 00:19:17.341 iops : min= 76, max= 1992, avg=674.10, stdev=469.50, samples=20 00:19:17.341 lat (msec) : 20=0.19%, 50=26.88%, 100=47.42%, 
250=22.48%, 500=0.87% 00:19:17.341 lat (msec) : 750=1.38%, 1000=0.78% 00:19:17.341 cpu : usr=0.44%, sys=2.36%, ctx=1492, majf=0, minf=4097 00:19:17.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:17.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:17.341 issued rwts: total=6805,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.341 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:17.341 job10: (groupid=0, jobs=1): err= 0: pid=1154370: Sat Jul 13 06:14:22 2024 00:19:17.341 read: IOPS=788, BW=197MiB/s (207MB/s)(1986MiB/10067msec) 00:19:17.341 slat (usec): min=9, max=336829, avg=613.85, stdev=5759.59 00:19:17.341 clat (usec): min=805, max=1045.3k, avg=80439.37, stdev=95187.68 00:19:17.341 lat (usec): min=827, max=1045.4k, avg=81053.22, stdev=95631.54 00:19:17.341 clat percentiles (usec): 00:19:17.341 | 1.00th=[ 1450], 5.00th=[ 2073], 10.00th=[ 4817], 00:19:17.341 | 20.00th=[ 17171], 30.00th=[ 43254], 40.00th=[ 55837], 00:19:17.341 | 50.00th=[ 68682], 60.00th=[ 82314], 70.00th=[ 95945], 00:19:17.341 | 80.00th=[ 111674], 90.00th=[ 141558], 95.00th=[ 170918], 00:19:17.341 | 99.00th=[ 608175], 99.50th=[ 851444], 99.90th=[ 926942], 00:19:17.341 | 99.95th=[ 960496], 99.99th=[1044382] 00:19:17.341 bw ( KiB/s): min=112128, max=339968, per=10.61%, avg=201719.60, stdev=69298.27, samples=20 00:19:17.341 iops : min= 438, max= 1328, avg=787.95, stdev=270.70, samples=20 00:19:17.341 lat (usec) : 1000=0.01% 00:19:17.341 lat (msec) : 2=4.68%, 4=3.61%, 10=5.96%, 20=7.04%, 50=13.17% 00:19:17.341 lat (msec) : 100=38.42%, 250=25.47%, 500=0.37%, 750=0.59%, 1000=0.64% 00:19:17.341 lat (msec) : 2000=0.04% 00:19:17.341 cpu : usr=0.39%, sys=2.42%, ctx=2598, majf=0, minf=4097 00:19:17.341 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:17.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.341 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:17.341 issued rwts: total=7942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.341 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:17.341 00:19:17.341 Run status group 0 (all jobs): 00:19:17.341 READ: bw=1857MiB/s (1947MB/s), 117MiB/s-245MiB/s (123MB/s-257MB/s), io=18.4GiB (19.8GB), run=10024-10155msec 00:19:17.341 00:19:17.341 Disk stats (read/write): 00:19:17.341 nvme0n1: ios=9903/0, merge=0/0, ticks=1220567/0, in_queue=1220567, util=97.13% 00:19:17.341 nvme10n1: ios=9386/0, merge=0/0, ticks=1227978/0, in_queue=1227978, util=97.35% 00:19:17.341 nvme1n1: ios=11973/0, merge=0/0, ticks=1214926/0, in_queue=1214926, util=97.65% 00:19:17.341 nvme2n1: ios=15852/0, merge=0/0, ticks=1209669/0, in_queue=1209669, util=97.79% 00:19:17.341 nvme3n1: ios=10719/0, merge=0/0, ticks=1214806/0, in_queue=1214806, util=97.88% 00:19:17.341 nvme4n1: ios=19464/0, merge=0/0, ticks=1217793/0, in_queue=1217793, util=98.23% 00:19:17.341 nvme5n1: ios=13776/0, merge=0/0, ticks=1224292/0, in_queue=1224292, util=98.40% 00:19:17.341 nvme6n1: ios=10347/0, merge=0/0, ticks=1234215/0, in_queue=1234215, util=98.51% 00:19:17.341 nvme7n1: ios=17868/0, merge=0/0, ticks=1235670/0, in_queue=1235670, util=98.92% 00:19:17.341 nvme8n1: ios=13458/0, merge=0/0, ticks=1203573/0, in_queue=1203573, util=99.11% 00:19:17.341 nvme9n1: ios=15605/0, merge=0/0, ticks=1241935/0, in_queue=1241935, util=99.22% 00:19:17.341 06:14:22 -- target/multiconnection.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:19:17.341 [global] 00:19:17.341 thread=1 00:19:17.341 invalidate=1 00:19:17.341 rw=randwrite 00:19:17.341 time_based=1 00:19:17.341 runtime=10 00:19:17.341 ioengine=libaio 00:19:17.341 direct=1 00:19:17.341 bs=262144 00:19:17.341 iodepth=64 00:19:17.341 norandommap=1 00:19:17.341 numjobs=1 00:19:17.341 00:19:17.341 [job0] 00:19:17.341 filename=/dev/nvme0n1 00:19:17.341 [job1] 00:19:17.341 filename=/dev/nvme10n1 00:19:17.341 [job2] 00:19:17.341 filename=/dev/nvme1n1 00:19:17.341 [job3] 00:19:17.341 filename=/dev/nvme2n1 00:19:17.341 [job4] 00:19:17.341 filename=/dev/nvme3n1 00:19:17.341 [job5] 00:19:17.341 filename=/dev/nvme4n1 00:19:17.341 [job6] 00:19:17.341 filename=/dev/nvme5n1 00:19:17.341 [job7] 00:19:17.341 filename=/dev/nvme6n1 00:19:17.341 [job8] 00:19:17.341 filename=/dev/nvme7n1 00:19:17.341 [job9] 00:19:17.341 filename=/dev/nvme8n1 00:19:17.341 [job10] 00:19:17.341 filename=/dev/nvme9n1 00:19:17.341 Could not set queue depth (nvme0n1) 00:19:17.341 Could not set queue depth (nvme10n1) 00:19:17.341 Could not set queue depth (nvme1n1) 00:19:17.341 Could not set queue depth (nvme2n1) 00:19:17.341 Could not set queue depth (nvme3n1) 00:19:17.341 Could not set queue depth (nvme4n1) 00:19:17.341 Could not set queue depth (nvme5n1) 00:19:17.341 Could not set queue depth (nvme6n1) 00:19:17.341 Could not set queue depth (nvme7n1) 00:19:17.341 Could not set queue depth (nvme8n1) 00:19:17.341 Could not set queue depth (nvme9n1) 00:19:17.341 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:17.341 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:17.341 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:17.341 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:17.341 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:17.341 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:17.341 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:17.341 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:17.341 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:17.341 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:17.341 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:17.341 fio-3.35 00:19:17.341 Starting 11 threads 00:19:27.328 00:19:27.328 job0: (groupid=0, jobs=1): err= 0: pid=1155416: Sat Jul 13 06:14:32 2024 00:19:27.328 write: IOPS=246, BW=61.7MiB/s (64.7MB/s)(625MiB/10125msec); 0 zone resets 00:19:27.328 slat (usec): min=24, max=117943, avg=3823.05, stdev=9572.53 00:19:27.328 clat (usec): min=1512, max=679462, avg=255378.51, stdev=149525.41 00:19:27.328 lat (usec): min=1562, max=679501, avg=259201.56, stdev=151620.48 00:19:27.328 clat percentiles (msec): 00:19:27.328 | 1.00th=[ 26], 5.00th=[ 69], 10.00th=[ 77], 20.00th=[ 
144], 00:19:27.328 | 30.00th=[ 171], 40.00th=[ 190], 50.00th=[ 209], 60.00th=[ 253], 00:19:27.328 | 70.00th=[ 313], 80.00th=[ 384], 90.00th=[ 477], 95.00th=[ 567], 00:19:27.328 | 99.00th=[ 659], 99.50th=[ 659], 99.90th=[ 676], 99.95th=[ 676], 00:19:27.328 | 99.99th=[ 676] 00:19:27.328 bw ( KiB/s): min=24576, max=126976, per=4.58%, avg=62361.60, stdev=33161.68, samples=20 00:19:27.328 iops : min= 96, max= 496, avg=243.60, stdev=129.54, samples=20 00:19:27.328 lat (msec) : 2=0.20%, 4=0.32%, 20=0.32%, 50=2.08%, 100=11.96% 00:19:27.328 lat (msec) : 250=44.82%, 500=32.05%, 750=8.24% 00:19:27.328 cpu : usr=0.68%, sys=0.86%, ctx=841, majf=0, minf=1 00:19:27.328 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:19:27.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:27.328 issued rwts: total=0,2499,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.328 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:27.328 job1: (groupid=0, jobs=1): err= 0: pid=1155428: Sat Jul 13 06:14:32 2024 00:19:27.328 write: IOPS=533, BW=133MiB/s (140MB/s)(1357MiB/10174msec); 0 zone resets 00:19:27.328 slat (usec): min=16, max=212614, avg=1388.78, stdev=6446.85 00:19:27.328 clat (msec): min=2, max=782, avg=118.38, stdev=124.31 00:19:27.328 lat (msec): min=2, max=782, avg=119.77, stdev=125.73 00:19:27.328 clat percentiles (msec): 00:19:27.328 | 1.00th=[ 9], 5.00th=[ 31], 10.00th=[ 41], 20.00th=[ 44], 00:19:27.328 | 30.00th=[ 60], 40.00th=[ 68], 50.00th=[ 78], 60.00th=[ 89], 00:19:27.328 | 70.00th=[ 121], 80.00th=[ 142], 90.00th=[ 236], 95.00th=[ 409], 00:19:27.328 | 99.00th=[ 735], 99.50th=[ 760], 99.90th=[ 785], 99.95th=[ 785], 00:19:27.328 | 99.99th=[ 785] 00:19:27.328 bw ( KiB/s): min=14336, max=382976, per=10.08%, avg=137344.00, stdev=92192.76, samples=20 00:19:27.328 iops : min= 56, max= 1496, avg=536.50, stdev=360.13, samples=20 00:19:27.328 lat (msec) : 4=0.20%, 10=0.94%, 20=2.10%, 50=21.94%, 100=39.11% 00:19:27.328 lat (msec) : 250=26.38%, 500=6.69%, 750=1.86%, 1000=0.77% 00:19:27.328 cpu : usr=1.65%, sys=1.68%, ctx=2406, majf=0, minf=1 00:19:27.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:27.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:27.328 issued rwts: total=0,5428,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.328 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:27.328 job2: (groupid=0, jobs=1): err= 0: pid=1155429: Sat Jul 13 06:14:32 2024 00:19:27.328 write: IOPS=572, BW=143MiB/s (150MB/s)(1449MiB/10125msec); 0 zone resets 00:19:27.328 slat (usec): min=15, max=133261, avg=951.89, stdev=5384.72 00:19:27.328 clat (usec): min=1355, max=682299, avg=110783.90, stdev=124610.70 00:19:27.328 lat (usec): min=1403, max=689995, avg=111735.78, stdev=125929.89 00:19:27.328 clat percentiles (msec): 00:19:27.328 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 12], 20.00th=[ 21], 00:19:27.328 | 30.00th=[ 34], 40.00th=[ 54], 50.00th=[ 70], 60.00th=[ 86], 00:19:27.328 | 70.00th=[ 106], 80.00th=[ 178], 90.00th=[ 313], 95.00th=[ 388], 00:19:27.328 | 99.00th=[ 575], 99.50th=[ 617], 99.90th=[ 642], 99.95th=[ 667], 00:19:27.328 | 99.99th=[ 684] 00:19:27.328 bw ( KiB/s): min=22528, max=323584, per=10.78%, avg=146805.45, stdev=83648.39, samples=20 00:19:27.328 iops : min= 88, max= 1264, avg=573.45, stdev=326.75, samples=20 
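Note: this write pass reuses the generated job-file layout shown above, one libaio job per connected namespace. A rough single-device command-line equivalent of one of these jobs, using the option values visible in the [global] section; the job name and device path are placeholders:

fio --name=job0 --filename=/dev/nvme0n1 \
    --rw=randwrite --bs=262144 --iodepth=64 --ioengine=libaio --direct=1 \
    --thread --invalidate=1 --norandommap --numjobs=1 \
    --time_based --runtime=10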
00:19:27.328 lat (msec) : 2=0.12%, 4=1.64%, 10=6.56%, 20=10.88%, 50=19.01% 00:19:27.328 lat (msec) : 100=28.53%, 250=20.15%, 500=10.83%, 750=2.28% 00:19:27.328 cpu : usr=1.58%, sys=2.07%, ctx=4403, majf=0, minf=1 00:19:27.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:27.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:27.328 issued rwts: total=0,5797,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.328 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:27.328 job3: (groupid=0, jobs=1): err= 0: pid=1155430: Sat Jul 13 06:14:32 2024 00:19:27.328 write: IOPS=330, BW=82.6MiB/s (86.7MB/s)(833MiB/10074msec); 0 zone resets 00:19:27.328 slat (usec): min=19, max=194116, avg=1968.90, stdev=7524.85 00:19:27.328 clat (usec): min=1181, max=667182, avg=191431.61, stdev=152730.92 00:19:27.328 lat (usec): min=1270, max=714181, avg=193400.51, stdev=154392.75 00:19:27.328 clat percentiles (msec): 00:19:27.328 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 19], 20.00th=[ 47], 00:19:27.328 | 30.00th=[ 75], 40.00th=[ 136], 50.00th=[ 163], 60.00th=[ 194], 00:19:27.328 | 70.00th=[ 234], 80.00th=[ 347], 90.00th=[ 422], 95.00th=[ 489], 00:19:27.328 | 99.00th=[ 617], 99.50th=[ 634], 99.90th=[ 659], 99.95th=[ 667], 00:19:27.328 | 99.99th=[ 667] 00:19:27.328 bw ( KiB/s): min=22528, max=216576, per=6.14%, avg=83645.85, stdev=49106.49, samples=20 00:19:27.328 iops : min= 88, max= 846, avg=326.70, stdev=191.80, samples=20 00:19:27.328 lat (msec) : 2=0.27%, 4=0.72%, 10=3.33%, 20=6.79%, 50=9.70% 00:19:27.328 lat (msec) : 100=14.50%, 250=37.00%, 500=23.36%, 750=4.32% 00:19:27.328 cpu : usr=1.07%, sys=1.23%, ctx=2146, majf=0, minf=1 00:19:27.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:19:27.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:27.328 issued rwts: total=0,3330,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.328 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:27.328 job4: (groupid=0, jobs=1): err= 0: pid=1155431: Sat Jul 13 06:14:32 2024 00:19:27.328 write: IOPS=445, BW=111MiB/s (117MB/s)(1124MiB/10095msec); 0 zone resets 00:19:27.328 slat (usec): min=21, max=217710, avg=1846.01, stdev=6678.23 00:19:27.328 clat (msec): min=2, max=660, avg=141.72, stdev=140.50 00:19:27.328 lat (msec): min=3, max=661, avg=143.57, stdev=142.43 00:19:27.328 clat percentiles (msec): 00:19:27.328 | 1.00th=[ 9], 5.00th=[ 15], 10.00th=[ 21], 20.00th=[ 44], 00:19:27.328 | 30.00th=[ 62], 40.00th=[ 75], 50.00th=[ 86], 60.00th=[ 128], 00:19:27.328 | 70.00th=[ 155], 80.00th=[ 194], 90.00th=[ 388], 95.00th=[ 481], 00:19:27.328 | 99.00th=[ 609], 99.50th=[ 625], 99.90th=[ 659], 99.95th=[ 659], 00:19:27.328 | 99.99th=[ 659] 00:19:27.328 bw ( KiB/s): min=26624, max=244224, per=8.33%, avg=113467.85, stdev=72672.05, samples=20 00:19:27.328 iops : min= 104, max= 954, avg=443.20, stdev=283.89, samples=20 00:19:27.328 lat (msec) : 4=0.07%, 10=1.42%, 20=8.16%, 50=14.30%, 100=30.41% 00:19:27.328 lat (msec) : 250=32.95%, 500=8.34%, 750=4.34% 00:19:27.328 cpu : usr=1.35%, sys=1.82%, ctx=2361, majf=0, minf=1 00:19:27.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:27.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.1%, >=64=0.0% 00:19:27.328 issued rwts: total=0,4495,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.328 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:27.328 job5: (groupid=0, jobs=1): err= 0: pid=1155432: Sat Jul 13 06:14:32 2024 00:19:27.328 write: IOPS=586, BW=147MiB/s (154MB/s)(1481MiB/10102msec); 0 zone resets 00:19:27.328 slat (usec): min=23, max=303799, avg=1018.94, stdev=5499.22 00:19:27.328 clat (usec): min=1903, max=836926, avg=107810.75, stdev=110898.58 00:19:27.328 lat (msec): min=2, max=851, avg=108.83, stdev=111.38 00:19:27.328 clat percentiles (msec): 00:19:27.328 | 1.00th=[ 8], 5.00th=[ 17], 10.00th=[ 28], 20.00th=[ 44], 00:19:27.328 | 30.00th=[ 62], 40.00th=[ 71], 50.00th=[ 74], 60.00th=[ 80], 00:19:27.328 | 70.00th=[ 107], 80.00th=[ 142], 90.00th=[ 218], 95.00th=[ 292], 00:19:27.328 | 99.00th=[ 693], 99.50th=[ 793], 99.90th=[ 827], 99.95th=[ 835], 00:19:27.328 | 99.99th=[ 835] 00:19:27.328 bw ( KiB/s): min=50688, max=322048, per=11.01%, avg=150006.75, stdev=68791.02, samples=20 00:19:27.328 iops : min= 198, max= 1258, avg=585.95, stdev=268.71, samples=20 00:19:27.328 lat (msec) : 2=0.02%, 4=0.17%, 10=1.45%, 20=4.98%, 50=20.50% 00:19:27.328 lat (msec) : 100=40.75%, 250=25.16%, 500=5.42%, 750=0.74%, 1000=0.81% 00:19:27.328 cpu : usr=1.93%, sys=2.02%, ctx=3372, majf=0, minf=1 00:19:27.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:19:27.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:27.328 issued rwts: total=0,5922,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.328 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:27.328 job6: (groupid=0, jobs=1): err= 0: pid=1155434: Sat Jul 13 06:14:32 2024 00:19:27.328 write: IOPS=470, BW=118MiB/s (123MB/s)(1185MiB/10074msec); 0 zone resets 00:19:27.328 slat (usec): min=20, max=216688, avg=1451.84, stdev=5608.43 00:19:27.328 clat (usec): min=1216, max=721499, avg=134320.93, stdev=96288.15 00:19:27.328 lat (usec): min=1256, max=732920, avg=135772.77, stdev=97354.85 00:19:27.328 clat percentiles (msec): 00:19:27.328 | 1.00th=[ 9], 5.00th=[ 29], 10.00th=[ 50], 20.00th=[ 72], 00:19:27.328 | 30.00th=[ 77], 40.00th=[ 81], 50.00th=[ 92], 60.00th=[ 132], 00:19:27.328 | 70.00th=[ 182], 80.00th=[ 207], 90.00th=[ 239], 95.00th=[ 292], 00:19:27.328 | 99.00th=[ 502], 99.50th=[ 592], 99.90th=[ 709], 99.95th=[ 718], 00:19:27.328 | 99.99th=[ 726] 00:19:27.328 bw ( KiB/s): min=30720, max=260608, per=8.79%, avg=119756.80, stdev=61126.14, samples=20 00:19:27.328 iops : min= 120, max= 1018, avg=467.80, stdev=238.77, samples=20 00:19:27.328 lat (msec) : 2=0.11%, 4=0.36%, 10=0.82%, 20=2.00%, 50=6.86% 00:19:27.328 lat (msec) : 100=44.40%, 250=37.29%, 500=7.13%, 750=1.03% 00:19:27.328 cpu : usr=1.39%, sys=1.61%, ctx=2641, majf=0, minf=1 00:19:27.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:27.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:27.328 issued rwts: total=0,4741,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.328 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:27.328 job7: (groupid=0, jobs=1): err= 0: pid=1155435: Sat Jul 13 06:14:32 2024 00:19:27.328 write: IOPS=788, BW=197MiB/s (207MB/s)(2006MiB/10177msec); 0 zone resets 00:19:27.328 slat (usec): min=17, max=97867, avg=804.18, stdev=3118.85 
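Note: the per-job summary lines are internally consistent. As an illustrative check on job7 above, 2006 MiB written in 10177 ms at the 256 KiB block size works out to the reported bandwidth and IOPS:

awk 'BEGIN {
    mib = 2006; ms = 10177; bs_mib = 0.25   # io, runtime and block size from job7
    bw = mib / (ms / 1000)                  # MiB/s
    printf "%.0f MiB/s, %.0f IOPS\n", bw, bw / bs_mib
}'
# expected output: 197 MiB/s, 788 IOPS, matching "write: IOPS=788, BW=197MiB/s"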
00:19:27.328 clat (usec): min=899, max=511688, avg=80321.80, stdev=86945.36 00:19:27.328 lat (usec): min=920, max=511731, avg=81125.98, stdev=87734.03 00:19:27.328 clat percentiles (usec): 00:19:27.328 | 1.00th=[ 1827], 5.00th=[ 11994], 10.00th=[ 19530], 20.00th=[ 38011], 00:19:27.328 | 30.00th=[ 40109], 40.00th=[ 41157], 50.00th=[ 42206], 60.00th=[ 44827], 00:19:27.328 | 70.00th=[ 64226], 80.00th=[109577], 90.00th=[219153], 95.00th=[270533], 00:19:27.328 | 99.00th=[417334], 99.50th=[438305], 99.90th=[467665], 99.95th=[488637], 00:19:27.328 | 99.99th=[509608] 00:19:27.328 bw ( KiB/s): min=32768, max=396800, per=14.97%, avg=203816.85, stdev=122877.42, samples=20 00:19:27.328 iops : min= 128, max= 1550, avg=796.15, stdev=479.99, samples=20 00:19:27.328 lat (usec) : 1000=0.36% 00:19:27.328 lat (msec) : 2=0.71%, 4=0.67%, 10=2.64%, 20=6.14%, 50=53.10% 00:19:27.328 lat (msec) : 100=14.62%, 250=14.39%, 500=7.32%, 750=0.04% 00:19:27.328 cpu : usr=2.56%, sys=2.97%, ctx=4234, majf=0, minf=1 00:19:27.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:27.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:27.328 issued rwts: total=0,8024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.328 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:27.328 job8: (groupid=0, jobs=1): err= 0: pid=1155436: Sat Jul 13 06:14:32 2024 00:19:27.328 write: IOPS=340, BW=85.2MiB/s (89.3MB/s)(867MiB/10173msec); 0 zone resets 00:19:27.328 slat (usec): min=23, max=140471, avg=2274.56, stdev=6135.28 00:19:27.328 clat (usec): min=1010, max=499175, avg=185292.49, stdev=99946.44 00:19:27.328 lat (usec): min=1037, max=499214, avg=187567.04, stdev=101131.87 00:19:27.328 clat percentiles (msec): 00:19:27.328 | 1.00th=[ 6], 5.00th=[ 15], 10.00th=[ 34], 20.00th=[ 106], 00:19:27.328 | 30.00th=[ 148], 40.00th=[ 167], 50.00th=[ 186], 60.00th=[ 205], 00:19:27.328 | 70.00th=[ 224], 80.00th=[ 247], 90.00th=[ 330], 95.00th=[ 368], 00:19:27.328 | 99.00th=[ 447], 99.50th=[ 468], 99.90th=[ 489], 99.95th=[ 502], 00:19:27.328 | 99.99th=[ 502] 00:19:27.328 bw ( KiB/s): min=34816, max=195072, per=6.40%, avg=87148.35, stdev=36308.53, samples=20 00:19:27.328 iops : min= 136, max= 762, avg=340.40, stdev=141.85, samples=20 00:19:27.328 lat (msec) : 2=0.66%, 4=0.06%, 10=2.05%, 20=3.87%, 50=6.32% 00:19:27.328 lat (msec) : 100=6.69%, 250=60.66%, 500=19.70% 00:19:27.328 cpu : usr=0.99%, sys=1.25%, ctx=1791, majf=0, minf=1 00:19:27.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:19:27.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:27.328 issued rwts: total=0,3467,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.328 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:27.328 job9: (groupid=0, jobs=1): err= 0: pid=1155437: Sat Jul 13 06:14:32 2024 00:19:27.328 write: IOPS=594, BW=149MiB/s (156MB/s)(1513MiB/10171msec); 0 zone resets 00:19:27.328 slat (usec): min=17, max=112768, avg=1399.99, stdev=4205.32 00:19:27.328 clat (usec): min=1227, max=469977, avg=106123.67, stdev=74362.05 00:19:27.328 lat (usec): min=1271, max=470036, avg=107523.66, stdev=75157.23 00:19:27.328 clat percentiles (msec): 00:19:27.328 | 1.00th=[ 5], 5.00th=[ 20], 10.00th=[ 40], 20.00th=[ 47], 00:19:27.328 | 30.00th=[ 68], 40.00th=[ 73], 50.00th=[ 79], 60.00th=[ 91], 
00:19:27.328 | 70.00th=[ 127], 80.00th=[ 169], 90.00th=[ 222], 95.00th=[ 259], 00:19:27.328 | 99.00th=[ 330], 99.50th=[ 384], 99.90th=[ 414], 99.95th=[ 418], 00:19:27.328 | 99.99th=[ 472] 00:19:27.328 bw ( KiB/s): min=52224, max=347136, per=11.26%, avg=153304.50, stdev=77457.45, samples=20 00:19:27.328 iops : min= 204, max= 1356, avg=598.80, stdev=302.59, samples=20 00:19:27.328 lat (msec) : 2=0.17%, 4=0.58%, 10=1.95%, 20=2.35%, 50=16.79% 00:19:27.328 lat (msec) : 100=43.23%, 250=28.79%, 500=6.15% 00:19:27.328 cpu : usr=1.91%, sys=2.01%, ctx=2480, majf=0, minf=1 00:19:27.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:27.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:27.328 issued rwts: total=0,6051,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.328 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:27.328 job10: (groupid=0, jobs=1): err= 0: pid=1155438: Sat Jul 13 06:14:32 2024 00:19:27.328 write: IOPS=433, BW=108MiB/s (114MB/s)(1097MiB/10131msec); 0 zone resets 00:19:27.328 slat (usec): min=26, max=80476, avg=1867.92, stdev=5515.98 00:19:27.328 clat (msec): min=2, max=622, avg=145.75, stdev=116.46 00:19:27.328 lat (msec): min=2, max=622, avg=147.62, stdev=118.02 00:19:27.328 clat percentiles (msec): 00:19:27.328 | 1.00th=[ 12], 5.00th=[ 37], 10.00th=[ 53], 20.00th=[ 71], 00:19:27.328 | 30.00th=[ 74], 40.00th=[ 81], 50.00th=[ 109], 60.00th=[ 136], 00:19:27.328 | 70.00th=[ 155], 80.00th=[ 199], 90.00th=[ 296], 95.00th=[ 430], 00:19:27.328 | 99.00th=[ 558], 99.50th=[ 567], 99.90th=[ 584], 99.95th=[ 625], 00:19:27.328 | 99.99th=[ 625] 00:19:27.328 bw ( KiB/s): min=26624, max=238592, per=8.13%, avg=110700.55, stdev=63443.55, samples=20 00:19:27.328 iops : min= 104, max= 932, avg=432.40, stdev=247.85, samples=20 00:19:27.328 lat (msec) : 4=0.09%, 10=0.78%, 20=1.73%, 50=6.86%, 100=37.18% 00:19:27.328 lat (msec) : 250=40.94%, 500=9.57%, 750=2.85% 00:19:27.328 cpu : usr=1.33%, sys=1.64%, ctx=1951, majf=0, minf=1 00:19:27.328 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:27.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.328 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:27.328 issued rwts: total=0,4387,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.328 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:27.328 00:19:27.328 Run status group 0 (all jobs): 00:19:27.328 WRITE: bw=1330MiB/s (1395MB/s), 61.7MiB/s-197MiB/s (64.7MB/s-207MB/s), io=13.2GiB (14.2GB), run=10074-10177msec 00:19:27.328 00:19:27.328 Disk stats (read/write): 00:19:27.328 nvme0n1: ios=49/4823, merge=0/0, ticks=44/1200103, in_queue=1200147, util=97.31% 00:19:27.329 nvme10n1: ios=45/10692, merge=0/0, ticks=2921/1179710, in_queue=1182631, util=100.00% 00:19:27.329 nvme1n1: ios=0/11462, merge=0/0, ticks=0/1222249, in_queue=1222249, util=97.62% 00:19:27.329 nvme2n1: ios=41/6387, merge=0/0, ticks=1069/1218947, in_queue=1220016, util=100.00% 00:19:27.329 nvme3n1: ios=41/8804, merge=0/0, ticks=1397/1213389, in_queue=1214786, util=100.00% 00:19:27.329 nvme4n1: ios=49/11650, merge=0/0, ticks=3650/1195603, in_queue=1199253, util=100.00% 00:19:27.329 nvme5n1: ios=42/9254, merge=0/0, ticks=1294/1218116, in_queue=1219410, util=100.00% 00:19:27.329 nvme6n1: ios=0/15835, merge=0/0, ticks=0/1216839, in_queue=1216839, util=98.42% 00:19:27.329 nvme7n1: ios=37/6769, 
merge=0/0, ticks=386/1214325, in_queue=1214711, util=100.00% 00:19:27.329 nvme8n1: ios=0/11907, merge=0/0, ticks=0/1213553, in_queue=1213553, util=98.98% 00:19:27.329 nvme9n1: ios=43/8584, merge=0/0, ticks=1366/1204004, in_queue=1205370, util=100.00% 00:19:27.329 06:14:32 -- target/multiconnection.sh@36 -- # sync 00:19:27.329 06:14:33 -- target/multiconnection.sh@37 -- # seq 1 11 00:19:27.329 06:14:33 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:27.329 06:14:33 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:27.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:27.329 06:14:33 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:19:27.329 06:14:33 -- common/autotest_common.sh@1198 -- # local i=0 00:19:27.329 06:14:33 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:27.329 06:14:33 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:19:27.329 06:14:33 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:27.329 06:14:33 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:19:27.329 06:14:33 -- common/autotest_common.sh@1210 -- # return 0 00:19:27.329 06:14:33 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:27.329 06:14:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:27.329 06:14:33 -- common/autotest_common.sh@10 -- # set +x 00:19:27.329 06:14:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:27.329 06:14:33 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:27.329 06:14:33 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:19:27.329 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:19:27.329 06:14:33 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:19:27.329 06:14:33 -- common/autotest_common.sh@1198 -- # local i=0 00:19:27.329 06:14:33 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:19:27.329 06:14:33 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:27.329 06:14:33 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:27.329 06:14:33 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:19:27.329 06:14:33 -- common/autotest_common.sh@1210 -- # return 0 00:19:27.329 06:14:33 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:27.329 06:14:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:27.329 06:14:33 -- common/autotest_common.sh@10 -- # set +x 00:19:27.329 06:14:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:27.329 06:14:33 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:27.329 06:14:33 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:19:27.587 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:19:27.587 06:14:33 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:19:27.587 06:14:33 -- common/autotest_common.sh@1198 -- # local i=0 00:19:27.587 06:14:33 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:27.587 06:14:33 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:19:27.587 06:14:33 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:27.587 06:14:33 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:19:27.587 06:14:33 -- common/autotest_common.sh@1210 -- # return 0 00:19:27.587 06:14:33 -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:27.587 06:14:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:27.587 06:14:33 -- common/autotest_common.sh@10 -- # set +x 00:19:27.587 06:14:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:27.587 06:14:33 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:27.587 06:14:33 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:19:27.847 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:19:27.847 06:14:34 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:19:27.847 06:14:34 -- common/autotest_common.sh@1198 -- # local i=0 00:19:27.847 06:14:34 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:27.847 06:14:34 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:19:27.847 06:14:34 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:27.847 06:14:34 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:19:27.847 06:14:34 -- common/autotest_common.sh@1210 -- # return 0 00:19:27.847 06:14:34 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:19:27.847 06:14:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:27.847 06:14:34 -- common/autotest_common.sh@10 -- # set +x 00:19:27.847 06:14:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:27.847 06:14:34 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:27.847 06:14:34 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:19:28.106 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:19:28.106 06:14:34 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:19:28.106 06:14:34 -- common/autotest_common.sh@1198 -- # local i=0 00:19:28.106 06:14:34 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:28.106 06:14:34 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:19:28.106 06:14:34 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:28.106 06:14:34 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:19:28.106 06:14:34 -- common/autotest_common.sh@1210 -- # return 0 00:19:28.106 06:14:34 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:19:28.106 06:14:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:28.106 06:14:34 -- common/autotest_common.sh@10 -- # set +x 00:19:28.106 06:14:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:28.106 06:14:34 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:28.106 06:14:34 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:19:28.364 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:19:28.364 06:14:34 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:19:28.364 06:14:34 -- common/autotest_common.sh@1198 -- # local i=0 00:19:28.364 06:14:34 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:28.364 06:14:34 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:19:28.364 06:14:34 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:28.364 06:14:34 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:19:28.364 06:14:34 -- common/autotest_common.sh@1210 -- # return 0 00:19:28.364 06:14:34 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 
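Note: the teardown loop traced here mirrors the connect loop: disconnect each subsystem, wait for its namespace to drop out of lsblk, then delete the subsystem on the target. A condensed sketch (the sleep-based wait is a simplification of the traced helper, and the rpc.py path assumes the SPDK repo root as the working directory):

for i in $(seq 1 11); do
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
    # wait until no block device with the SPDK${i} serial remains
    while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
        sleep 1
    done
    # remove the subsystem on the target side
    scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
done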
00:19:28.364 06:14:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:28.364 06:14:34 -- common/autotest_common.sh@10 -- # set +x 00:19:28.364 06:14:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:28.364 06:14:34 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:28.364 06:14:34 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:19:28.622 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:19:28.622 06:14:35 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:19:28.622 06:14:35 -- common/autotest_common.sh@1198 -- # local i=0 00:19:28.622 06:14:35 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:28.622 06:14:35 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:19:28.622 06:14:35 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:28.622 06:14:35 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:19:28.622 06:14:35 -- common/autotest_common.sh@1210 -- # return 0 00:19:28.622 06:14:35 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:19:28.622 06:14:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:28.622 06:14:35 -- common/autotest_common.sh@10 -- # set +x 00:19:28.622 06:14:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:28.622 06:14:35 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:28.622 06:14:35 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:19:28.622 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:19:28.622 06:14:35 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:19:28.622 06:14:35 -- common/autotest_common.sh@1198 -- # local i=0 00:19:28.622 06:14:35 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:28.622 06:14:35 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:19:28.622 06:14:35 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:28.622 06:14:35 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:19:28.880 06:14:35 -- common/autotest_common.sh@1210 -- # return 0 00:19:28.880 06:14:35 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:19:28.880 06:14:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:28.880 06:14:35 -- common/autotest_common.sh@10 -- # set +x 00:19:28.880 06:14:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:28.880 06:14:35 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:28.880 06:14:35 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:19:28.880 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:19:28.880 06:14:35 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:19:28.880 06:14:35 -- common/autotest_common.sh@1198 -- # local i=0 00:19:28.880 06:14:35 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:28.880 06:14:35 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:19:28.880 06:14:35 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:28.880 06:14:35 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:19:28.880 06:14:35 -- common/autotest_common.sh@1210 -- # return 0 00:19:28.880 06:14:35 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:19:28.880 06:14:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:28.880 06:14:35 
-- common/autotest_common.sh@10 -- # set +x 00:19:28.880 06:14:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:28.880 06:14:35 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:28.880 06:14:35 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:19:29.138 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:19:29.138 06:14:35 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:19:29.138 06:14:35 -- common/autotest_common.sh@1198 -- # local i=0 00:19:29.138 06:14:35 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:29.138 06:14:35 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:19:29.138 06:14:35 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:29.138 06:14:35 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:19:29.138 06:14:35 -- common/autotest_common.sh@1210 -- # return 0 00:19:29.138 06:14:35 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:19:29.138 06:14:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.138 06:14:35 -- common/autotest_common.sh@10 -- # set +x 00:19:29.138 06:14:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.138 06:14:35 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:29.138 06:14:35 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:19:29.138 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:19:29.138 06:14:35 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:19:29.138 06:14:35 -- common/autotest_common.sh@1198 -- # local i=0 00:19:29.138 06:14:35 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:29.138 06:14:35 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:19:29.138 06:14:35 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:29.138 06:14:35 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:19:29.138 06:14:35 -- common/autotest_common.sh@1210 -- # return 0 00:19:29.138 06:14:35 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:19:29.138 06:14:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.138 06:14:35 -- common/autotest_common.sh@10 -- # set +x 00:19:29.138 06:14:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.138 06:14:35 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:19:29.138 06:14:35 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:19:29.138 06:14:35 -- target/multiconnection.sh@47 -- # nvmftestfini 00:19:29.138 06:14:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:29.138 06:14:35 -- nvmf/common.sh@116 -- # sync 00:19:29.138 06:14:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:29.138 06:14:35 -- nvmf/common.sh@119 -- # set +e 00:19:29.138 06:14:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:29.138 06:14:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:29.138 rmmod nvme_tcp 00:19:29.138 rmmod nvme_fabrics 00:19:29.138 rmmod nvme_keyring 00:19:29.396 06:14:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:29.396 06:14:35 -- nvmf/common.sh@123 -- # set -e 00:19:29.396 06:14:35 -- nvmf/common.sh@124 -- # return 0 00:19:29.396 06:14:35 -- nvmf/common.sh@477 -- # '[' -n 1149982 ']' 00:19:29.396 06:14:35 -- nvmf/common.sh@478 -- # killprocess 1149982 00:19:29.396 06:14:35 -- common/autotest_common.sh@926 -- # '[' -z 
1149982 ']' 00:19:29.396 06:14:35 -- common/autotest_common.sh@930 -- # kill -0 1149982 00:19:29.396 06:14:35 -- common/autotest_common.sh@931 -- # uname 00:19:29.396 06:14:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:29.396 06:14:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1149982 00:19:29.396 06:14:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:29.396 06:14:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:29.396 06:14:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1149982' 00:19:29.396 killing process with pid 1149982 00:19:29.396 06:14:35 -- common/autotest_common.sh@945 -- # kill 1149982 00:19:29.396 06:14:35 -- common/autotest_common.sh@950 -- # wait 1149982 00:19:29.963 06:14:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:29.963 06:14:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:29.963 06:14:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:29.963 06:14:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:29.963 06:14:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:29.963 06:14:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.963 06:14:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:29.963 06:14:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.868 06:14:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:31.868 00:19:31.868 real 1m1.079s 00:19:31.868 user 3m20.592s 00:19:31.868 sys 0m25.622s 00:19:31.868 06:14:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:31.868 06:14:38 -- common/autotest_common.sh@10 -- # set +x 00:19:31.868 ************************************ 00:19:31.868 END TEST nvmf_multiconnection 00:19:31.868 ************************************ 00:19:31.868 06:14:38 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:31.868 06:14:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:31.868 06:14:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:31.868 06:14:38 -- common/autotest_common.sh@10 -- # set +x 00:19:31.868 ************************************ 00:19:31.868 START TEST nvmf_initiator_timeout 00:19:31.868 ************************************ 00:19:31.868 06:14:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:31.868 * Looking for test storage... 
00:19:32.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:32.126 06:14:38 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:32.127 06:14:38 -- nvmf/common.sh@7 -- # uname -s 00:19:32.127 06:14:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:32.127 06:14:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:32.127 06:14:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:32.127 06:14:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:32.127 06:14:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:32.127 06:14:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:32.127 06:14:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:32.127 06:14:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:32.127 06:14:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:32.127 06:14:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:32.127 06:14:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:32.127 06:14:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:32.127 06:14:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:32.127 06:14:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:32.127 06:14:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:32.127 06:14:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:32.127 06:14:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:32.127 06:14:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:32.127 06:14:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:32.127 06:14:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.127 06:14:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.127 06:14:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.127 06:14:38 -- paths/export.sh@5 -- # export PATH 00:19:32.127 06:14:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.127 06:14:38 -- nvmf/common.sh@46 -- # : 0 00:19:32.127 06:14:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:32.127 06:14:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:32.127 06:14:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:32.127 06:14:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:32.127 06:14:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:32.127 06:14:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:32.127 06:14:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:32.127 06:14:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:32.127 06:14:38 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:32.127 06:14:38 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:32.127 06:14:38 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:19:32.127 06:14:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:32.127 06:14:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:32.127 06:14:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:32.127 06:14:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:32.127 06:14:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:32.127 06:14:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.127 06:14:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:32.127 06:14:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.127 06:14:38 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:32.127 06:14:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:32.127 06:14:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:32.127 06:14:38 -- common/autotest_common.sh@10 -- # set +x 00:19:34.031 06:14:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:34.031 06:14:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:34.031 06:14:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:34.031 06:14:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:34.031 06:14:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:34.031 06:14:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:34.031 06:14:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:34.031 06:14:40 -- nvmf/common.sh@294 -- # net_devs=() 00:19:34.031 06:14:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:34.031 
06:14:40 -- nvmf/common.sh@295 -- # e810=() 00:19:34.031 06:14:40 -- nvmf/common.sh@295 -- # local -ga e810 00:19:34.031 06:14:40 -- nvmf/common.sh@296 -- # x722=() 00:19:34.031 06:14:40 -- nvmf/common.sh@296 -- # local -ga x722 00:19:34.031 06:14:40 -- nvmf/common.sh@297 -- # mlx=() 00:19:34.031 06:14:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:34.031 06:14:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:34.031 06:14:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:34.031 06:14:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:34.031 06:14:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:34.031 06:14:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:34.031 06:14:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:34.031 06:14:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:34.031 06:14:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:34.031 06:14:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:34.031 06:14:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:34.031 06:14:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:34.031 06:14:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:34.031 06:14:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:34.031 06:14:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:34.031 06:14:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:34.031 06:14:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:34.031 06:14:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:34.031 06:14:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:34.031 06:14:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:34.031 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:34.031 06:14:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:34.031 06:14:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:34.031 06:14:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:34.031 06:14:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:34.031 06:14:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:34.031 06:14:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:34.031 06:14:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:34.031 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:34.031 06:14:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:34.031 06:14:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:34.031 06:14:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:34.031 06:14:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:34.031 06:14:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:34.031 06:14:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:34.031 06:14:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:34.031 06:14:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:34.031 06:14:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:34.031 06:14:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:34.031 06:14:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:34.031 06:14:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:34.031 06:14:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:19:34.032 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:34.032 06:14:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:34.032 06:14:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:34.032 06:14:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:34.032 06:14:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:34.032 06:14:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:34.032 06:14:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:34.032 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:34.032 06:14:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:34.032 06:14:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:34.032 06:14:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:34.032 06:14:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:34.032 06:14:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:34.032 06:14:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:34.032 06:14:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:34.032 06:14:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:34.032 06:14:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:34.032 06:14:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:34.032 06:14:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:34.032 06:14:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:34.032 06:14:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:34.032 06:14:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:34.032 06:14:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:34.032 06:14:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:34.032 06:14:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:34.032 06:14:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:34.032 06:14:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:34.032 06:14:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:34.032 06:14:40 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:34.032 06:14:40 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:34.032 06:14:40 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:34.032 06:14:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:34.032 06:14:40 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:34.032 06:14:40 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:34.032 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:34.032 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:19:34.032 00:19:34.032 --- 10.0.0.2 ping statistics --- 00:19:34.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.032 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:19:34.032 06:14:40 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:34.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:34.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:19:34.032 00:19:34.032 --- 10.0.0.1 ping statistics --- 00:19:34.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:34.032 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:19:34.032 06:14:40 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:34.032 06:14:40 -- nvmf/common.sh@410 -- # return 0 00:19:34.032 06:14:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:34.032 06:14:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:34.032 06:14:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:34.032 06:14:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:34.032 06:14:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:34.032 06:14:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:34.032 06:14:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:34.032 06:14:40 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:19:34.032 06:14:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:34.032 06:14:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:34.032 06:14:40 -- common/autotest_common.sh@10 -- # set +x 00:19:34.032 06:14:40 -- nvmf/common.sh@469 -- # nvmfpid=1158800 00:19:34.032 06:14:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:34.032 06:14:40 -- nvmf/common.sh@470 -- # waitforlisten 1158800 00:19:34.032 06:14:40 -- common/autotest_common.sh@819 -- # '[' -z 1158800 ']' 00:19:34.032 06:14:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.032 06:14:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:34.032 06:14:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.032 06:14:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:34.032 06:14:40 -- common/autotest_common.sh@10 -- # set +x 00:19:34.032 [2024-07-13 06:14:40.501965] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:34.032 [2024-07-13 06:14:40.502037] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.032 EAL: No free 2048 kB hugepages reported on node 1 00:19:34.292 [2024-07-13 06:14:40.566977] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:34.292 [2024-07-13 06:14:40.672657] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:34.293 [2024-07-13 06:14:40.672808] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:34.293 [2024-07-13 06:14:40.672826] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:34.293 [2024-07-13 06:14:40.672838] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:34.293 [2024-07-13 06:14:40.672930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:34.293 [2024-07-13 06:14:40.673031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:34.293 [2024-07-13 06:14:40.673080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:34.293 [2024-07-13 06:14:40.673083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.230 06:14:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:35.230 06:14:41 -- common/autotest_common.sh@852 -- # return 0 00:19:35.230 06:14:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:35.230 06:14:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:35.230 06:14:41 -- common/autotest_common.sh@10 -- # set +x 00:19:35.230 06:14:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:35.230 06:14:41 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:35.230 06:14:41 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:35.230 06:14:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:35.230 06:14:41 -- common/autotest_common.sh@10 -- # set +x 00:19:35.230 Malloc0 00:19:35.230 06:14:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:35.230 06:14:41 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:19:35.230 06:14:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:35.230 06:14:41 -- common/autotest_common.sh@10 -- # set +x 00:19:35.230 Delay0 00:19:35.230 06:14:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:35.230 06:14:41 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:35.230 06:14:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:35.230 06:14:41 -- common/autotest_common.sh@10 -- # set +x 00:19:35.231 [2024-07-13 06:14:41.549029] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:35.231 06:14:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:35.231 06:14:41 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:35.231 06:14:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:35.231 06:14:41 -- common/autotest_common.sh@10 -- # set +x 00:19:35.231 06:14:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:35.231 06:14:41 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:35.231 06:14:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:35.231 06:14:41 -- common/autotest_common.sh@10 -- # set +x 00:19:35.231 06:14:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:35.231 06:14:41 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:35.231 06:14:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:35.231 06:14:41 -- common/autotest_common.sh@10 -- # set +x 00:19:35.231 [2024-07-13 06:14:41.577333] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:35.231 06:14:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:35.231 06:14:41 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:35.798 06:14:42 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:19:35.798 06:14:42 -- common/autotest_common.sh@1177 -- # local i=0 00:19:35.798 06:14:42 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:35.798 06:14:42 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:35.798 06:14:42 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:38.339 06:14:44 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:38.339 06:14:44 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:38.339 06:14:44 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:38.339 06:14:44 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:38.339 06:14:44 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:38.339 06:14:44 -- common/autotest_common.sh@1187 -- # return 0 00:19:38.339 06:14:44 -- target/initiator_timeout.sh@35 -- # fio_pid=1159247 00:19:38.339 06:14:44 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:19:38.339 06:14:44 -- target/initiator_timeout.sh@37 -- # sleep 3 00:19:38.339 [global] 00:19:38.339 thread=1 00:19:38.339 invalidate=1 00:19:38.339 rw=write 00:19:38.339 time_based=1 00:19:38.339 runtime=60 00:19:38.339 ioengine=libaio 00:19:38.339 direct=1 00:19:38.339 bs=4096 00:19:38.339 iodepth=1 00:19:38.339 norandommap=0 00:19:38.339 numjobs=1 00:19:38.339 00:19:38.339 verify_dump=1 00:19:38.339 verify_backlog=512 00:19:38.339 verify_state_save=0 00:19:38.339 do_verify=1 00:19:38.339 verify=crc32c-intel 00:19:38.339 [job0] 00:19:38.339 filename=/dev/nvme0n1 00:19:38.339 Could not set queue depth (nvme0n1) 00:19:38.339 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:38.339 fio-3.35 00:19:38.339 Starting 1 thread 00:19:40.883 06:14:47 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:19:40.883 06:14:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:40.883 06:14:47 -- common/autotest_common.sh@10 -- # set +x 00:19:40.883 true 00:19:40.883 06:14:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:40.883 06:14:47 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:19:40.883 06:14:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:40.883 06:14:47 -- common/autotest_common.sh@10 -- # set +x 00:19:40.883 true 00:19:40.883 06:14:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:40.883 06:14:47 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:19:40.883 06:14:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:40.883 06:14:47 -- common/autotest_common.sh@10 -- # set +x 00:19:40.883 true 00:19:40.884 06:14:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:40.884 06:14:47 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:19:40.884 06:14:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:40.884 06:14:47 -- common/autotest_common.sh@10 -- # set +x 00:19:40.884 true 00:19:40.884 06:14:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
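The trace above is the heart of the initiator timeout scenario: while the fio write job started through fio-wrapper runs against /dev/nvme0n1, the test inflates the artificial latencies of the Delay0 delay bdev so that outstanding I/O stalls at the target, then a few lines below (the "... 30" updates) drops them back so the stalled I/O can drain before fio's 60-second runtime expires. A minimal sketch of the same RPC sequence, assuming a running SPDK target that already exposes a delay bdev named Delay0 and scripts/rpc.py from the SPDK tree; the per-call error checking done by rpc_cmd is omitted and the values are simplified (the script itself uses a larger value for p99_write):

    # inflate all four latency classes (values in microseconds) to stall in-flight I/O
    for lat in avg_read avg_write p99_read p99_write; do
        scripts/rpc.py bdev_delay_update_latency Delay0 "$lat" 31000000
    done
    sleep 3
    # restore small latencies so the stalled I/O completes within the fio runtime
    for lat in avg_read avg_write p99_read p99_write; do
        scripts/rpc.py bdev_delay_update_latency Delay0 "$lat" 30
    done

Using a delay bdev keeps the whole scenario in software: no slow device is needed to provoke initiator-side timeouts, and the latency can be lowered again on demand so the job still finishes cleanly.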
00:19:40.884 06:14:47 -- target/initiator_timeout.sh@45 -- # sleep 3 00:19:44.175 06:14:50 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:19:44.175 06:14:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.175 06:14:50 -- common/autotest_common.sh@10 -- # set +x 00:19:44.175 true 00:19:44.175 06:14:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.175 06:14:50 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:19:44.175 06:14:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.175 06:14:50 -- common/autotest_common.sh@10 -- # set +x 00:19:44.175 true 00:19:44.175 06:14:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.175 06:14:50 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:19:44.175 06:14:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.175 06:14:50 -- common/autotest_common.sh@10 -- # set +x 00:19:44.175 true 00:19:44.175 06:14:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.175 06:14:50 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:19:44.175 06:14:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.175 06:14:50 -- common/autotest_common.sh@10 -- # set +x 00:19:44.175 true 00:19:44.175 06:14:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.175 06:14:50 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:19:44.175 06:14:50 -- target/initiator_timeout.sh@54 -- # wait 1159247 00:20:40.418 00:20:40.418 job0: (groupid=0, jobs=1): err= 0: pid=1159322: Sat Jul 13 06:15:44 2024 00:20:40.418 read: IOPS=426, BW=1707KiB/s (1748kB/s)(100MiB/60018msec) 00:20:40.418 slat (usec): min=5, max=11586, avg=12.51, stdev=98.00 00:20:40.418 clat (usec): min=274, max=41053k, avg=2046.79, stdev=256544.11 00:20:40.418 lat (usec): min=280, max=41053k, avg=2059.30, stdev=256544.12 00:20:40.418 clat percentiles (usec): 00:20:40.418 | 1.00th=[ 285], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 302], 00:20:40.418 | 30.00th=[ 310], 40.00th=[ 318], 50.00th=[ 322], 60.00th=[ 330], 00:20:40.418 | 70.00th=[ 338], 80.00th=[ 351], 90.00th=[ 375], 95.00th=[ 461], 00:20:40.418 | 99.00th=[ 537], 99.50th=[ 586], 99.90th=[41681], 99.95th=[42206], 00:20:40.418 | 99.99th=[42206] 00:20:40.418 write: IOPS=435, BW=1740KiB/s (1782kB/s)(102MiB/60018msec); 0 zone resets 00:20:40.418 slat (usec): min=6, max=31595, avg=17.16, stdev=195.66 00:20:40.418 clat (usec): min=197, max=1978, avg=254.02, stdev=49.43 00:20:40.418 lat (usec): min=204, max=32004, avg=271.18, stdev=204.43 00:20:40.418 clat percentiles (usec): 00:20:40.418 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 215], 20.00th=[ 219], 00:20:40.418 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 245], 00:20:40.418 | 70.00th=[ 269], 80.00th=[ 293], 90.00th=[ 318], 95.00th=[ 359], 00:20:40.418 | 99.00th=[ 408], 99.50th=[ 420], 99.90th=[ 433], 99.95th=[ 437], 00:20:40.418 | 99.99th=[ 1270] 00:20:40.418 bw ( KiB/s): min= 696, max= 8192, per=100.00%, avg=6044.00, stdev=1813.92, samples=34 00:20:40.418 iops : min= 174, max= 2048, avg=1511.00, stdev=453.48, samples=34 00:20:40.418 lat (usec) : 250=31.78%, 500=66.93%, 750=1.14%, 1000=0.01% 00:20:40.418 lat (msec) : 2=0.01%, 4=0.01%, 50=0.13%, >=2000=0.01% 00:20:40.418 cpu : usr=0.92%, sys=1.60%, ctx=51726, majf=0, minf=39 00:20:40.418 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:40.418 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:40.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:40.418 issued rwts: total=25609,26112,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:40.418 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:40.418 00:20:40.418 Run status group 0 (all jobs): 00:20:40.418 READ: bw=1707KiB/s (1748kB/s), 1707KiB/s-1707KiB/s (1748kB/s-1748kB/s), io=100MiB (105MB), run=60018-60018msec 00:20:40.418 WRITE: bw=1740KiB/s (1782kB/s), 1740KiB/s-1740KiB/s (1782kB/s-1782kB/s), io=102MiB (107MB), run=60018-60018msec 00:20:40.418 00:20:40.418 Disk stats (read/write): 00:20:40.418 nvme0n1: ios=25707/26112, merge=0/0, ticks=13579/6364, in_queue=19943, util=99.84% 00:20:40.418 06:15:44 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:40.418 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:40.418 06:15:44 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:40.418 06:15:44 -- common/autotest_common.sh@1198 -- # local i=0 00:20:40.418 06:15:44 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:20:40.418 06:15:44 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:40.418 06:15:44 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:20:40.418 06:15:44 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:40.418 06:15:44 -- common/autotest_common.sh@1210 -- # return 0 00:20:40.418 06:15:44 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:20:40.418 06:15:44 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:20:40.418 nvmf hotplug test: fio successful as expected 00:20:40.418 06:15:44 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:40.418 06:15:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:40.418 06:15:44 -- common/autotest_common.sh@10 -- # set +x 00:20:40.418 06:15:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:40.418 06:15:44 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:20:40.418 06:15:44 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:20:40.418 06:15:44 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:20:40.418 06:15:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:40.418 06:15:44 -- nvmf/common.sh@116 -- # sync 00:20:40.418 06:15:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:40.418 06:15:44 -- nvmf/common.sh@119 -- # set +e 00:20:40.418 06:15:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:40.418 06:15:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:40.418 rmmod nvme_tcp 00:20:40.418 rmmod nvme_fabrics 00:20:40.418 rmmod nvme_keyring 00:20:40.418 06:15:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:40.418 06:15:44 -- nvmf/common.sh@123 -- # set -e 00:20:40.418 06:15:44 -- nvmf/common.sh@124 -- # return 0 00:20:40.418 06:15:44 -- nvmf/common.sh@477 -- # '[' -n 1158800 ']' 00:20:40.418 06:15:44 -- nvmf/common.sh@478 -- # killprocess 1158800 00:20:40.418 06:15:44 -- common/autotest_common.sh@926 -- # '[' -z 1158800 ']' 00:20:40.418 06:15:44 -- common/autotest_common.sh@930 -- # kill -0 1158800 00:20:40.418 06:15:44 -- common/autotest_common.sh@931 -- # uname 00:20:40.418 06:15:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:40.418 06:15:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 
1158800 00:20:40.418 06:15:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:40.418 06:15:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:40.418 06:15:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1158800' 00:20:40.418 killing process with pid 1158800 00:20:40.418 06:15:44 -- common/autotest_common.sh@945 -- # kill 1158800 00:20:40.419 06:15:44 -- common/autotest_common.sh@950 -- # wait 1158800 00:20:40.419 06:15:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:40.419 06:15:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:40.419 06:15:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:40.419 06:15:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:40.419 06:15:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:40.419 06:15:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.419 06:15:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:40.419 06:15:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.678 06:15:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:40.678 00:20:40.678 real 1m8.831s 00:20:40.678 user 4m13.310s 00:20:40.678 sys 0m8.257s 00:20:40.678 06:15:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:40.678 06:15:47 -- common/autotest_common.sh@10 -- # set +x 00:20:40.678 ************************************ 00:20:40.678 END TEST nvmf_initiator_timeout 00:20:40.678 ************************************ 00:20:40.678 06:15:47 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:20:40.678 06:15:47 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:20:40.678 06:15:47 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:20:40.678 06:15:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:40.678 06:15:47 -- common/autotest_common.sh@10 -- # set +x 00:20:42.581 06:15:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:42.581 06:15:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:42.581 06:15:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:42.581 06:15:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:42.581 06:15:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:42.581 06:15:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:42.581 06:15:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:42.581 06:15:49 -- nvmf/common.sh@294 -- # net_devs=() 00:20:42.581 06:15:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:42.840 06:15:49 -- nvmf/common.sh@295 -- # e810=() 00:20:42.840 06:15:49 -- nvmf/common.sh@295 -- # local -ga e810 00:20:42.840 06:15:49 -- nvmf/common.sh@296 -- # x722=() 00:20:42.840 06:15:49 -- nvmf/common.sh@296 -- # local -ga x722 00:20:42.840 06:15:49 -- nvmf/common.sh@297 -- # mlx=() 00:20:42.840 06:15:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:42.840 06:15:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:42.840 06:15:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:42.840 06:15:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:42.840 06:15:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:42.840 06:15:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:42.840 06:15:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:42.840 06:15:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:42.840 06:15:49 -- nvmf/common.sh@313 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:42.840 06:15:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:42.840 06:15:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:42.840 06:15:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:42.840 06:15:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:42.840 06:15:49 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:42.840 06:15:49 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:42.840 06:15:49 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:42.840 06:15:49 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:42.840 06:15:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:42.840 06:15:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:42.840 06:15:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:42.840 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:42.840 06:15:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:42.840 06:15:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:42.840 06:15:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.840 06:15:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.840 06:15:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:42.840 06:15:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:42.840 06:15:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:42.840 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:42.840 06:15:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:42.840 06:15:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:42.840 06:15:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.840 06:15:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.840 06:15:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:42.840 06:15:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:42.840 06:15:49 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:42.840 06:15:49 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:42.840 06:15:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:42.840 06:15:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.840 06:15:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:42.840 06:15:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.840 06:15:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:42.840 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:42.840 06:15:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.840 06:15:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:42.840 06:15:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.840 06:15:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:42.840 06:15:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.840 06:15:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:42.840 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:42.840 06:15:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.840 06:15:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:42.840 06:15:49 -- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:42.840 06:15:49 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:20:42.840 06:15:49 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:42.840 06:15:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:42.840 06:15:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:42.840 06:15:49 -- common/autotest_common.sh@10 -- # set +x 00:20:42.840 ************************************ 00:20:42.840 START TEST nvmf_perf_adq 00:20:42.840 ************************************ 00:20:42.840 06:15:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:42.840 * Looking for test storage... 00:20:42.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:42.841 06:15:49 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:42.841 06:15:49 -- nvmf/common.sh@7 -- # uname -s 00:20:42.841 06:15:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.841 06:15:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.841 06:15:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.841 06:15:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.841 06:15:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.841 06:15:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.841 06:15:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.841 06:15:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.841 06:15:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.841 06:15:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.841 06:15:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.841 06:15:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.841 06:15:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.841 06:15:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.841 06:15:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:42.841 06:15:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:42.841 06:15:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.841 06:15:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.841 06:15:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.841 06:15:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.841 06:15:49 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.841 06:15:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.841 06:15:49 -- paths/export.sh@5 -- # export PATH 00:20:42.841 06:15:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.841 06:15:49 -- nvmf/common.sh@46 -- # : 0 00:20:42.841 06:15:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:42.841 06:15:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:42.841 06:15:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:42.841 06:15:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.841 06:15:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.841 06:15:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:42.841 06:15:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:42.841 06:15:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:42.841 06:15:49 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:42.841 06:15:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:42.841 06:15:49 -- common/autotest_common.sh@10 -- # set +x 00:20:44.745 06:15:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:44.745 06:15:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:44.745 06:15:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:44.745 06:15:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:44.745 06:15:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:44.745 06:15:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:44.745 06:15:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:44.745 06:15:51 -- nvmf/common.sh@294 -- # net_devs=() 00:20:44.745 06:15:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:44.745 06:15:51 -- nvmf/common.sh@295 -- # e810=() 00:20:44.745 06:15:51 -- nvmf/common.sh@295 -- # local -ga e810 00:20:44.745 06:15:51 -- nvmf/common.sh@296 -- # x722=() 00:20:44.745 06:15:51 -- nvmf/common.sh@296 -- # local -ga x722 00:20:44.745 06:15:51 -- nvmf/common.sh@297 -- # mlx=() 00:20:44.745 06:15:51 -- nvmf/common.sh@297 -- # local 
-ga mlx 00:20:44.745 06:15:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:44.745 06:15:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:44.745 06:15:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:44.745 06:15:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:44.745 06:15:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:44.745 06:15:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:44.745 06:15:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:44.746 06:15:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:44.746 06:15:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:44.746 06:15:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:44.746 06:15:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:44.746 06:15:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:44.746 06:15:51 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:44.746 06:15:51 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:44.746 06:15:51 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:44.746 06:15:51 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:44.746 06:15:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:44.746 06:15:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:44.746 06:15:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:44.746 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:44.746 06:15:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:44.746 06:15:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:44.746 06:15:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.746 06:15:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.746 06:15:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:44.746 06:15:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:44.746 06:15:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:44.746 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:44.746 06:15:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:44.746 06:15:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:44.746 06:15:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.746 06:15:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.746 06:15:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:44.746 06:15:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:44.746 06:15:51 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:44.746 06:15:51 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:44.746 06:15:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:44.746 06:15:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.746 06:15:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:44.746 06:15:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.746 06:15:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:44.746 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:44.746 06:15:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.746 06:15:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:44.746 06:15:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:20:44.746 06:15:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:44.746 06:15:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.746 06:15:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:44.746 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:44.746 06:15:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.746 06:15:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:44.746 06:15:51 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:44.746 06:15:51 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:44.746 06:15:51 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:44.746 06:15:51 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:20:44.746 06:15:51 -- target/perf_adq.sh@52 -- # rmmod ice 00:20:45.314 06:15:51 -- target/perf_adq.sh@53 -- # modprobe ice 00:20:47.223 06:15:53 -- target/perf_adq.sh@54 -- # sleep 5 00:20:52.491 06:15:58 -- target/perf_adq.sh@67 -- # nvmftestinit 00:20:52.491 06:15:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:52.491 06:15:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:52.491 06:15:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:52.491 06:15:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:52.491 06:15:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:52.491 06:15:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:52.491 06:15:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:52.491 06:15:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.491 06:15:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:52.491 06:15:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:52.491 06:15:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:52.491 06:15:58 -- common/autotest_common.sh@10 -- # set +x 00:20:52.491 06:15:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:52.491 06:15:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:52.491 06:15:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:52.491 06:15:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:52.491 06:15:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:52.491 06:15:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:52.491 06:15:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:52.491 06:15:58 -- nvmf/common.sh@294 -- # net_devs=() 00:20:52.491 06:15:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:52.491 06:15:58 -- nvmf/common.sh@295 -- # e810=() 00:20:52.491 06:15:58 -- nvmf/common.sh@295 -- # local -ga e810 00:20:52.491 06:15:58 -- nvmf/common.sh@296 -- # x722=() 00:20:52.491 06:15:58 -- nvmf/common.sh@296 -- # local -ga x722 00:20:52.491 06:15:58 -- nvmf/common.sh@297 -- # mlx=() 00:20:52.491 06:15:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:52.491 06:15:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:52.491 06:15:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:52.491 06:15:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:52.491 06:15:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:52.491 06:15:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:52.491 06:15:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:52.491 06:15:58 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:52.491 06:15:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:52.491 06:15:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:52.491 06:15:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:52.491 06:15:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:52.491 06:15:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:52.491 06:15:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:52.491 06:15:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:52.491 06:15:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:52.491 06:15:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:52.491 06:15:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:52.491 06:15:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:52.491 06:15:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:52.491 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:52.491 06:15:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:52.491 06:15:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:52.491 06:15:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.491 06:15:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.491 06:15:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:52.491 06:15:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:52.491 06:15:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:52.491 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:52.491 06:15:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:52.491 06:15:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:52.491 06:15:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.491 06:15:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.491 06:15:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:52.491 06:15:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:52.491 06:15:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:52.491 06:15:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:52.491 06:15:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:52.491 06:15:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.491 06:15:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:52.491 06:15:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.491 06:15:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:52.491 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:52.491 06:15:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.491 06:15:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:52.491 06:15:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.491 06:15:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:52.491 06:15:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.491 06:15:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:52.491 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:52.491 06:15:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.491 06:15:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:52.491 06:15:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:52.491 06:15:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:52.491 06:15:58 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:52.491 06:15:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:52.491 06:15:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:52.491 06:15:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:52.491 06:15:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:52.491 06:15:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:52.491 06:15:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:52.491 06:15:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:52.491 06:15:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:52.491 06:15:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:52.491 06:15:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:52.491 06:15:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:52.491 06:15:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:52.491 06:15:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:52.491 06:15:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:52.491 06:15:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:52.491 06:15:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:52.491 06:15:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:52.491 06:15:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:52.491 06:15:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:52.491 06:15:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:52.491 06:15:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:52.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:52.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:20:52.491 00:20:52.491 --- 10.0.0.2 ping statistics --- 00:20:52.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.491 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:20:52.491 06:15:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:52.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:52.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:20:52.491 00:20:52.491 --- 10.0.0.1 ping statistics --- 00:20:52.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:52.491 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:20:52.491 06:15:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:52.491 06:15:58 -- nvmf/common.sh@410 -- # return 0 00:20:52.491 06:15:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:52.491 06:15:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:52.491 06:15:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:52.491 06:15:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:52.491 06:15:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:52.491 06:15:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:52.491 06:15:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:52.491 06:15:58 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:52.491 06:15:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:52.491 06:15:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:52.491 06:15:58 -- common/autotest_common.sh@10 -- # set +x 00:20:52.491 06:15:58 -- nvmf/common.sh@469 -- # nvmfpid=1171729 00:20:52.491 06:15:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:52.491 06:15:58 -- nvmf/common.sh@470 -- # waitforlisten 1171729 00:20:52.491 06:15:58 -- common/autotest_common.sh@819 -- # '[' -z 1171729 ']' 00:20:52.491 06:15:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:52.491 06:15:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:52.491 06:15:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:52.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:52.491 06:15:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:52.492 06:15:58 -- common/autotest_common.sh@10 -- # set +x 00:20:52.492 [2024-07-13 06:15:58.832341] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:52.492 [2024-07-13 06:15:58.832402] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:52.492 EAL: No free 2048 kB hugepages reported on node 1 00:20:52.492 [2024-07-13 06:15:58.898277] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:52.752 [2024-07-13 06:15:59.018925] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:52.752 [2024-07-13 06:15:59.019060] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:52.752 [2024-07-13 06:15:59.019079] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:52.752 [2024-07-13 06:15:59.019091] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
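The nvmftestinit sequence logged above builds a loopback NVMe/TCP topology on a single host: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 for the target, its peer port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in iptables, and reachability is checked with ping in both directions before nvmf_tgt is launched inside the namespace. A minimal sketch of that setup, using the interface names and addresses taken from the log (driver binding and error handling omitted):

  # loopback topology used by nvmftestinit; assumes the ice driver already exposes cvl_0_0/cvl_0_1
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                               # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # namespace -> host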
00:20:52.752 [2024-07-13 06:15:59.019196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:52.752 [2024-07-13 06:15:59.019263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.752 [2024-07-13 06:15:59.019318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:52.752 [2024-07-13 06:15:59.019321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.752 06:15:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:52.752 06:15:59 -- common/autotest_common.sh@852 -- # return 0 00:20:52.752 06:15:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:52.752 06:15:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:52.752 06:15:59 -- common/autotest_common.sh@10 -- # set +x 00:20:52.752 06:15:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:52.752 06:15:59 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:20:52.752 06:15:59 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:52.752 06:15:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:52.752 06:15:59 -- common/autotest_common.sh@10 -- # set +x 00:20:52.752 06:15:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:52.752 06:15:59 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:20:52.752 06:15:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:52.752 06:15:59 -- common/autotest_common.sh@10 -- # set +x 00:20:52.752 06:15:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:52.752 06:15:59 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:52.752 06:15:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:52.752 06:15:59 -- common/autotest_common.sh@10 -- # set +x 00:20:52.752 [2024-07-13 06:15:59.221822] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:52.752 06:15:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:52.752 06:15:59 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:52.752 06:15:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:52.752 06:15:59 -- common/autotest_common.sh@10 -- # set +x 00:20:52.752 Malloc1 00:20:52.752 06:15:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:52.752 06:15:59 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:52.752 06:15:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:52.752 06:15:59 -- common/autotest_common.sh@10 -- # set +x 00:20:53.012 06:15:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:53.012 06:15:59 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:53.012 06:15:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:53.012 06:15:59 -- common/autotest_common.sh@10 -- # set +x 00:20:53.012 06:15:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:53.012 06:15:59 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:53.012 06:15:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:53.012 06:15:59 -- common/autotest_common.sh@10 -- # set +x 00:20:53.012 [2024-07-13 06:15:59.275190] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.012 06:15:59 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:53.012 06:15:59 -- target/perf_adq.sh@73 -- # perfpid=1171882 00:20:53.012 06:15:59 -- target/perf_adq.sh@74 -- # sleep 2 00:20:53.012 06:15:59 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:53.012 EAL: No free 2048 kB hugepages reported on node 1 00:20:54.918 06:16:01 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:20:54.918 06:16:01 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:54.918 06:16:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:54.918 06:16:01 -- target/perf_adq.sh@76 -- # wc -l 00:20:54.918 06:16:01 -- common/autotest_common.sh@10 -- # set +x 00:20:54.918 06:16:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:54.918 06:16:01 -- target/perf_adq.sh@76 -- # count=4 00:20:54.918 06:16:01 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:20:54.918 06:16:01 -- target/perf_adq.sh@81 -- # wait 1171882 00:21:03.032 Initializing NVMe Controllers 00:21:03.032 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:03.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:03.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:03.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:03.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:03.032 Initialization complete. Launching workers. 00:21:03.032 ======================================================== 00:21:03.032 Latency(us) 00:21:03.032 Device Information : IOPS MiB/s Average min max 00:21:03.032 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11216.68 43.82 5706.25 1273.27 9004.22 00:21:03.032 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10925.38 42.68 5858.64 1231.91 10295.68 00:21:03.032 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11322.87 44.23 5653.36 1023.59 9608.50 00:21:03.032 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11547.97 45.11 5542.19 1495.21 9291.59 00:21:03.032 ======================================================== 00:21:03.032 Total : 45012.90 175.83 5687.84 1023.59 10295.68 00:21:03.032 00:21:03.032 06:16:09 -- target/perf_adq.sh@82 -- # nvmftestfini 00:21:03.032 06:16:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:03.032 06:16:09 -- nvmf/common.sh@116 -- # sync 00:21:03.032 06:16:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:03.032 06:16:09 -- nvmf/common.sh@119 -- # set +e 00:21:03.032 06:16:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:03.032 06:16:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:03.032 rmmod nvme_tcp 00:21:03.032 rmmod nvme_fabrics 00:21:03.032 rmmod nvme_keyring 00:21:03.032 06:16:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:03.032 06:16:09 -- nvmf/common.sh@123 -- # set -e 00:21:03.032 06:16:09 -- nvmf/common.sh@124 -- # return 0 00:21:03.032 06:16:09 -- nvmf/common.sh@477 -- # '[' -n 1171729 ']' 00:21:03.032 06:16:09 -- nvmf/common.sh@478 -- # killprocess 1171729 00:21:03.032 06:16:09 -- common/autotest_common.sh@926 -- # '[' -z 1171729 ']' 00:21:03.032 06:16:09 -- common/autotest_common.sh@930 -- 
# kill -0 1171729 00:21:03.032 06:16:09 -- common/autotest_common.sh@931 -- # uname 00:21:03.032 06:16:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:03.032 06:16:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1171729 00:21:03.032 06:16:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:03.032 06:16:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:03.032 06:16:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1171729' 00:21:03.032 killing process with pid 1171729 00:21:03.032 06:16:09 -- common/autotest_common.sh@945 -- # kill 1171729 00:21:03.032 06:16:09 -- common/autotest_common.sh@950 -- # wait 1171729 00:21:03.290 06:16:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:03.290 06:16:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:03.290 06:16:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:03.290 06:16:09 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:03.290 06:16:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:03.290 06:16:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.290 06:16:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:03.290 06:16:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.825 06:16:11 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:05.825 06:16:11 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:21:05.825 06:16:11 -- target/perf_adq.sh@52 -- # rmmod ice 00:21:06.084 06:16:12 -- target/perf_adq.sh@53 -- # modprobe ice 00:21:07.992 06:16:14 -- target/perf_adq.sh@54 -- # sleep 5 00:21:13.269 06:16:19 -- target/perf_adq.sh@87 -- # nvmftestinit 00:21:13.269 06:16:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:13.269 06:16:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:13.269 06:16:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:13.269 06:16:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:13.269 06:16:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:13.269 06:16:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.269 06:16:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:13.269 06:16:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.269 06:16:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:13.269 06:16:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:13.269 06:16:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:13.269 06:16:19 -- common/autotest_common.sh@10 -- # set +x 00:21:13.269 06:16:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:13.269 06:16:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:13.269 06:16:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:13.269 06:16:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:13.269 06:16:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:13.269 06:16:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:13.269 06:16:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:13.269 06:16:19 -- nvmf/common.sh@294 -- # net_devs=() 00:21:13.269 06:16:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:13.269 06:16:19 -- nvmf/common.sh@295 -- # e810=() 00:21:13.269 06:16:19 -- nvmf/common.sh@295 -- # local -ga e810 00:21:13.269 06:16:19 -- nvmf/common.sh@296 -- # x722=() 00:21:13.269 06:16:19 -- nvmf/common.sh@296 -- # local -ga x722 00:21:13.269 06:16:19 -- nvmf/common.sh@297 -- # mlx=() 00:21:13.269 06:16:19 
-- nvmf/common.sh@297 -- # local -ga mlx 00:21:13.269 06:16:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:13.269 06:16:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:13.269 06:16:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:13.269 06:16:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:13.269 06:16:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:13.269 06:16:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:13.269 06:16:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:13.269 06:16:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:13.269 06:16:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:13.269 06:16:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:13.269 06:16:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:13.269 06:16:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:13.269 06:16:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:13.269 06:16:19 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:13.269 06:16:19 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:13.269 06:16:19 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:13.269 06:16:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:13.269 06:16:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:13.269 06:16:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:13.269 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:13.269 06:16:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:13.269 06:16:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:13.269 06:16:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.269 06:16:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.269 06:16:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:13.269 06:16:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:13.269 06:16:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:13.269 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:13.269 06:16:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:13.269 06:16:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:13.269 06:16:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:13.269 06:16:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:13.269 06:16:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:13.269 06:16:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:13.269 06:16:19 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:13.269 06:16:19 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:13.269 06:16:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:13.269 06:16:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.269 06:16:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:13.269 06:16:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.269 06:16:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:13.269 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:13.269 06:16:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.269 06:16:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:13.269 06:16:19 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:13.269 06:16:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:13.269 06:16:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:13.269 06:16:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:13.269 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:13.269 06:16:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:13.269 06:16:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:13.269 06:16:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:13.269 06:16:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:13.269 06:16:19 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:13.269 06:16:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:13.269 06:16:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:13.269 06:16:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:13.269 06:16:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:13.269 06:16:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:13.269 06:16:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:13.269 06:16:19 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:13.269 06:16:19 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:13.269 06:16:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:13.269 06:16:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:13.269 06:16:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:13.269 06:16:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:13.269 06:16:19 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:13.269 06:16:19 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:13.269 06:16:19 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:13.269 06:16:19 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:13.269 06:16:19 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:13.269 06:16:19 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:13.269 06:16:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:13.269 06:16:19 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:13.269 06:16:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:13.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:13.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:21:13.269 00:21:13.269 --- 10.0.0.2 ping statistics --- 00:21:13.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.269 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:21:13.269 06:16:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:13.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:13.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:21:13.269 00:21:13.270 --- 10.0.0.1 ping statistics --- 00:21:13.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:13.270 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:21:13.270 06:16:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:13.270 06:16:19 -- nvmf/common.sh@410 -- # return 0 00:21:13.270 06:16:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:13.270 06:16:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:13.270 06:16:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:13.270 06:16:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:13.270 06:16:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:13.270 06:16:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:13.270 06:16:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:13.270 06:16:19 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:21:13.270 06:16:19 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:13.270 06:16:19 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:13.270 06:16:19 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:13.270 net.core.busy_poll = 1 00:21:13.270 06:16:19 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:13.270 net.core.busy_read = 1 00:21:13.270 06:16:19 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:13.270 06:16:19 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:13.270 06:16:19 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:13.270 06:16:19 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:13.270 06:16:19 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:13.270 06:16:19 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:13.270 06:16:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:13.270 06:16:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:13.270 06:16:19 -- common/autotest_common.sh@10 -- # set +x 00:21:13.270 06:16:19 -- nvmf/common.sh@469 -- # nvmfpid=1174585 00:21:13.270 06:16:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:13.270 06:16:19 -- nvmf/common.sh@470 -- # waitforlisten 1174585 00:21:13.270 06:16:19 -- common/autotest_common.sh@819 -- # '[' -z 1174585 ']' 00:21:13.270 06:16:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.270 06:16:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:13.270 06:16:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
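Before the second perf pass, adq_configure_driver (logged just above) prepares the E810 port inside the target namespace for ADQ: hardware TC offload is switched on, the channel-pkt-inspect-optimize private flag is switched off, busy polling is enabled system-wide, and an mqprio qdisc in channel mode plus a hardware-offloaded flower filter steer NVMe/TCP traffic to 10.0.0.2:4420 into a dedicated traffic class. A condensed sketch of those commands, with the interface name, queue split and namespace taken from the log:

  ns() { ip netns exec cvl_0_0_ns_spdk "$@"; }       # run a command inside the target namespace
  ns ethtool --offload cvl_0_0 hw-tc-offload on
  ns ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # two traffic classes in channel mode: TC0 -> queues 0-1 (default), TC1 -> queues 2-3 (ADQ)
  ns tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  ns tc qdisc add dev cvl_0_0 ingress
  # offload the match to the NIC (skip_sw) and pin NVMe/TCP flows to hardware TC1
  ns tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
  # the test then runs spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 to align XPS with the RX queues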
00:21:13.270 06:16:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:13.270 06:16:19 -- common/autotest_common.sh@10 -- # set +x 00:21:13.270 [2024-07-13 06:16:19.774569] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:13.270 [2024-07-13 06:16:19.774666] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:13.531 EAL: No free 2048 kB hugepages reported on node 1 00:21:13.531 [2024-07-13 06:16:19.841247] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:13.531 [2024-07-13 06:16:19.947781] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:13.531 [2024-07-13 06:16:19.947937] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:13.531 [2024-07-13 06:16:19.947957] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:13.531 [2024-07-13 06:16:19.947970] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:13.531 [2024-07-13 06:16:19.948020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:13.531 [2024-07-13 06:16:19.948079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:13.531 [2024-07-13 06:16:19.948154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:13.531 [2024-07-13 06:16:19.948157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:13.531 06:16:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:13.531 06:16:19 -- common/autotest_common.sh@852 -- # return 0 00:21:13.531 06:16:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:13.531 06:16:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:13.531 06:16:19 -- common/autotest_common.sh@10 -- # set +x 00:21:13.531 06:16:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:13.531 06:16:19 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:21:13.531 06:16:19 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:13.531 06:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.531 06:16:19 -- common/autotest_common.sh@10 -- # set +x 00:21:13.531 06:16:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.531 06:16:20 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:21:13.531 06:16:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.531 06:16:20 -- common/autotest_common.sh@10 -- # set +x 00:21:13.793 06:16:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.793 06:16:20 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:13.793 06:16:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.793 06:16:20 -- common/autotest_common.sh@10 -- # set +x 00:21:13.793 [2024-07-13 06:16:20.128913] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.793 06:16:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.793 06:16:20 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:13.793 06:16:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.793 06:16:20 -- 
common/autotest_common.sh@10 -- # set +x 00:21:13.793 Malloc1 00:21:13.793 06:16:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.793 06:16:20 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:13.793 06:16:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.793 06:16:20 -- common/autotest_common.sh@10 -- # set +x 00:21:13.793 06:16:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.793 06:16:20 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:13.793 06:16:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.793 06:16:20 -- common/autotest_common.sh@10 -- # set +x 00:21:13.793 06:16:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.793 06:16:20 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:13.793 06:16:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:13.793 06:16:20 -- common/autotest_common.sh@10 -- # set +x 00:21:13.793 [2024-07-13 06:16:20.182250] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:13.793 06:16:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:13.793 06:16:20 -- target/perf_adq.sh@94 -- # perfpid=1174616 00:21:13.793 06:16:20 -- target/perf_adq.sh@95 -- # sleep 2 00:21:13.793 06:16:20 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:13.793 EAL: No free 2048 kB hugepages reported on node 1 00:21:15.696 06:16:22 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:21:15.696 06:16:22 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:15.696 06:16:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:15.696 06:16:22 -- target/perf_adq.sh@97 -- # wc -l 00:21:15.696 06:16:22 -- common/autotest_common.sh@10 -- # set +x 00:21:15.696 06:16:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:15.954 06:16:22 -- target/perf_adq.sh@97 -- # count=2 00:21:15.954 06:16:22 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:21:15.954 06:16:22 -- target/perf_adq.sh@103 -- # wait 1174616 00:21:24.072 Initializing NVMe Controllers 00:21:24.072 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:24.072 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:24.072 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:24.072 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:24.072 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:24.072 Initialization complete. Launching workers. 
00:21:24.072 ======================================================== 00:21:24.072 Latency(us) 00:21:24.072 Device Information : IOPS MiB/s Average min max 00:21:24.072 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4971.54 19.42 12903.26 1970.95 60252.27 00:21:24.072 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5110.04 19.96 12528.16 1519.74 61110.24 00:21:24.072 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13473.83 52.63 4765.20 1065.36 45607.15 00:21:24.072 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4334.15 16.93 14769.91 2281.25 59827.13 00:21:24.072 ======================================================== 00:21:24.072 Total : 27889.55 108.94 9193.01 1065.36 61110.24 00:21:24.072 00:21:24.072 06:16:30 -- target/perf_adq.sh@104 -- # nvmftestfini 00:21:24.072 06:16:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:24.072 06:16:30 -- nvmf/common.sh@116 -- # sync 00:21:24.072 06:16:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:24.072 06:16:30 -- nvmf/common.sh@119 -- # set +e 00:21:24.072 06:16:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:24.072 06:16:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:24.072 rmmod nvme_tcp 00:21:24.072 rmmod nvme_fabrics 00:21:24.072 rmmod nvme_keyring 00:21:24.072 06:16:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:24.072 06:16:30 -- nvmf/common.sh@123 -- # set -e 00:21:24.072 06:16:30 -- nvmf/common.sh@124 -- # return 0 00:21:24.072 06:16:30 -- nvmf/common.sh@477 -- # '[' -n 1174585 ']' 00:21:24.072 06:16:30 -- nvmf/common.sh@478 -- # killprocess 1174585 00:21:24.072 06:16:30 -- common/autotest_common.sh@926 -- # '[' -z 1174585 ']' 00:21:24.072 06:16:30 -- common/autotest_common.sh@930 -- # kill -0 1174585 00:21:24.072 06:16:30 -- common/autotest_common.sh@931 -- # uname 00:21:24.072 06:16:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:24.072 06:16:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1174585 00:21:24.072 06:16:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:24.072 06:16:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:24.072 06:16:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1174585' 00:21:24.072 killing process with pid 1174585 00:21:24.073 06:16:30 -- common/autotest_common.sh@945 -- # kill 1174585 00:21:24.073 06:16:30 -- common/autotest_common.sh@950 -- # wait 1174585 00:21:24.331 06:16:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:24.331 06:16:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:24.331 06:16:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:24.331 06:16:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:24.331 06:16:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:24.331 06:16:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.331 06:16:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:24.331 06:16:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.865 06:16:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:26.865 06:16:32 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:21:26.865 00:21:26.865 real 0m43.666s 00:21:26.865 user 2m32.718s 00:21:26.865 sys 0m12.414s 00:21:26.865 06:16:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:26.865 06:16:32 -- common/autotest_common.sh@10 -- # set +x 00:21:26.865 
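The two perf_adq passes run the same spdk_nvme_perf workload (-q 64 -o 4096 -w randread -t 10 -c 0xF0); they differ in how the sockets and the NIC are configured rather than in the workload. The first pass uses --enable-placement-id 0 and --sock-priority 0, while the second enables placement IDs, socket priority 1, busy polling and the tc/flower steering shown earlier; the nvmf_get_stats / current_io_qpairs checks above assert how the connections land on the poll groups in each mode. Because nvmf_tgt is started with --wait-for-rpc, the socket options can be applied before the framework initializes. A hedged sketch of the ADQ-enabled variant driven through scripts/rpc.py (the test issues the same RPCs via its rpc_cmd helper; rpc.py's default socket /var/tmp/spdk.sock is the one the target listens on):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
  $RPC framework_start_init
  $RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
  $RPC bdev_malloc_create 64 512 -b Malloc1
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420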
************************************ 00:21:26.865 END TEST nvmf_perf_adq 00:21:26.865 ************************************ 00:21:26.865 06:16:32 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:26.865 06:16:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:26.865 06:16:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:26.865 06:16:32 -- common/autotest_common.sh@10 -- # set +x 00:21:26.865 ************************************ 00:21:26.865 START TEST nvmf_shutdown 00:21:26.865 ************************************ 00:21:26.865 06:16:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:26.865 * Looking for test storage... 00:21:26.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:26.865 06:16:32 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:26.865 06:16:32 -- nvmf/common.sh@7 -- # uname -s 00:21:26.865 06:16:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:26.865 06:16:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:26.865 06:16:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:26.865 06:16:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:26.865 06:16:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:26.865 06:16:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:26.865 06:16:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:26.865 06:16:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:26.865 06:16:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:26.865 06:16:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:26.865 06:16:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.865 06:16:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.865 06:16:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:26.865 06:16:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:26.865 06:16:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:26.865 06:16:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:26.865 06:16:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:26.865 06:16:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:26.865 06:16:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:26.865 06:16:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.865 06:16:32 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.865 06:16:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.865 06:16:32 -- paths/export.sh@5 -- # export PATH 00:21:26.865 06:16:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.865 06:16:32 -- nvmf/common.sh@46 -- # : 0 00:21:26.865 06:16:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:26.865 06:16:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:26.865 06:16:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:26.865 06:16:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:26.865 06:16:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:26.865 06:16:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:26.865 06:16:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:26.865 06:16:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:26.865 06:16:32 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:26.865 06:16:32 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:26.865 06:16:32 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:26.865 06:16:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:26.865 06:16:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:26.865 06:16:32 -- common/autotest_common.sh@10 -- # set +x 00:21:26.865 ************************************ 00:21:26.865 START TEST nvmf_shutdown_tc1 00:21:26.865 ************************************ 00:21:26.865 06:16:32 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:21:26.865 06:16:32 -- target/shutdown.sh@74 -- # starttarget 00:21:26.865 06:16:32 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:26.865 06:16:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:26.865 06:16:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:26.865 06:16:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:26.865 06:16:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:26.865 06:16:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:26.865 
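Each run_test in this log repeats the same bring-up/tear-down cycle: the previous nvmf_tgt is killed, the nvme-tcp and nvme-fabrics modules are unloaded, the SPDK test namespace is removed, addresses are flushed, and nvmftestinit rebuilds everything from scratch (the perf_adq passes additionally reload the ice driver, presumably to clear queue and TC state). A rough sketch of the tear-down half; the namespace deletion is an assumption, since only the _remove_spdk_ns helper name is visible in the log:

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                                   # stop the target started for the previous test
  ip netns del cvl_0_0_ns_spdk 2>/dev/null || true  # assumed effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                          # drop the initiator-side address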
06:16:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.865 06:16:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:26.865 06:16:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.865 06:16:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:26.865 06:16:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:26.865 06:16:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:26.865 06:16:32 -- common/autotest_common.sh@10 -- # set +x 00:21:28.768 06:16:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:28.769 06:16:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:28.769 06:16:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:28.769 06:16:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:28.769 06:16:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:28.769 06:16:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:28.769 06:16:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:28.769 06:16:34 -- nvmf/common.sh@294 -- # net_devs=() 00:21:28.769 06:16:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:28.769 06:16:34 -- nvmf/common.sh@295 -- # e810=() 00:21:28.769 06:16:34 -- nvmf/common.sh@295 -- # local -ga e810 00:21:28.769 06:16:34 -- nvmf/common.sh@296 -- # x722=() 00:21:28.769 06:16:34 -- nvmf/common.sh@296 -- # local -ga x722 00:21:28.769 06:16:34 -- nvmf/common.sh@297 -- # mlx=() 00:21:28.769 06:16:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:28.769 06:16:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:28.769 06:16:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:28.769 06:16:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:28.769 06:16:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:28.769 06:16:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:28.769 06:16:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:28.769 06:16:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:28.769 06:16:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:28.769 06:16:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:28.769 06:16:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:28.769 06:16:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:28.769 06:16:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:28.769 06:16:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:28.769 06:16:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:28.769 06:16:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:28.769 06:16:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:28.769 06:16:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:28.769 06:16:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:28.769 06:16:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:28.769 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:28.769 06:16:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:28.769 06:16:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:28.769 06:16:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.769 06:16:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.769 06:16:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:28.769 06:16:34 -- nvmf/common.sh@339 
-- # for pci in "${pci_devs[@]}" 00:21:28.769 06:16:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:28.769 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:28.769 06:16:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:28.769 06:16:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:28.769 06:16:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:28.769 06:16:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:28.769 06:16:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:28.769 06:16:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:28.769 06:16:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:28.769 06:16:34 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:28.769 06:16:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:28.769 06:16:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.769 06:16:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:28.769 06:16:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.769 06:16:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:28.769 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:28.769 06:16:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.769 06:16:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:28.769 06:16:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:28.769 06:16:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:28.769 06:16:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:28.769 06:16:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:28.769 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:28.769 06:16:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:28.769 06:16:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:28.769 06:16:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:28.769 06:16:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:28.769 06:16:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:28.769 06:16:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:28.769 06:16:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:28.769 06:16:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:28.769 06:16:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:28.769 06:16:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:28.769 06:16:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:28.769 06:16:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:28.769 06:16:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:28.769 06:16:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:28.769 06:16:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:28.769 06:16:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:28.769 06:16:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:28.769 06:16:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:28.769 06:16:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:28.769 06:16:35 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:28.769 06:16:35 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:28.769 06:16:35 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:28.769 06:16:35 -- nvmf/common.sh@259 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:28.769 06:16:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:28.769 06:16:35 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:28.769 06:16:35 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:28.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:28.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:21:28.769 00:21:28.769 --- 10.0.0.2 ping statistics --- 00:21:28.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.769 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:21:28.769 06:16:35 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:28.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:28.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:21:28.769 00:21:28.769 --- 10.0.0.1 ping statistics --- 00:21:28.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.769 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:21:28.769 06:16:35 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:28.769 06:16:35 -- nvmf/common.sh@410 -- # return 0 00:21:28.769 06:16:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:28.769 06:16:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:28.769 06:16:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:28.769 06:16:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:28.769 06:16:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:28.769 06:16:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:28.769 06:16:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:28.769 06:16:35 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:28.769 06:16:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:28.769 06:16:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:28.769 06:16:35 -- common/autotest_common.sh@10 -- # set +x 00:21:28.769 06:16:35 -- nvmf/common.sh@469 -- # nvmfpid=1177823 00:21:28.769 06:16:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:28.769 06:16:35 -- nvmf/common.sh@470 -- # waitforlisten 1177823 00:21:28.769 06:16:35 -- common/autotest_common.sh@819 -- # '[' -z 1177823 ']' 00:21:28.769 06:16:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.769 06:16:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:28.769 06:16:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.769 06:16:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:28.769 06:16:35 -- common/autotest_common.sh@10 -- # set +x 00:21:28.769 [2024-07-13 06:16:35.170380] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:21:28.769 [2024-07-13 06:16:35.170447] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.769 EAL: No free 2048 kB hugepages reported on node 1 00:21:28.769 [2024-07-13 06:16:35.233569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:29.029 [2024-07-13 06:16:35.344683] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:29.029 [2024-07-13 06:16:35.344840] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:29.029 [2024-07-13 06:16:35.344857] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:29.029 [2024-07-13 06:16:35.344887] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:29.029 [2024-07-13 06:16:35.344982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:29.029 [2024-07-13 06:16:35.345046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:29.029 [2024-07-13 06:16:35.345068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:29.029 [2024-07-13 06:16:35.345071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.606 06:16:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:29.606 06:16:36 -- common/autotest_common.sh@852 -- # return 0 00:21:29.606 06:16:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:29.606 06:16:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:29.606 06:16:36 -- common/autotest_common.sh@10 -- # set +x 00:21:29.866 06:16:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:29.866 06:16:36 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:29.866 06:16:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:29.866 06:16:36 -- common/autotest_common.sh@10 -- # set +x 00:21:29.866 [2024-07-13 06:16:36.129344] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:29.866 06:16:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:29.866 06:16:36 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:29.866 06:16:36 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:29.866 06:16:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:29.866 06:16:36 -- common/autotest_common.sh@10 -- # set +x 00:21:29.866 06:16:36 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:29.866 06:16:36 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:29.866 06:16:36 -- target/shutdown.sh@28 -- # cat 00:21:29.866 06:16:36 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:29.866 06:16:36 -- target/shutdown.sh@28 -- # cat 00:21:29.866 06:16:36 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:29.866 06:16:36 -- target/shutdown.sh@28 -- # cat 00:21:29.866 06:16:36 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:29.866 06:16:36 -- target/shutdown.sh@28 -- # cat 00:21:29.866 06:16:36 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:29.866 06:16:36 -- target/shutdown.sh@28 -- # cat 00:21:29.866 06:16:36 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:29.866 06:16:36 -- 
target/shutdown.sh@28 -- # cat 00:21:29.866 06:16:36 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:29.866 06:16:36 -- target/shutdown.sh@28 -- # cat 00:21:29.866 06:16:36 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:29.866 06:16:36 -- target/shutdown.sh@28 -- # cat 00:21:29.866 06:16:36 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:29.866 06:16:36 -- target/shutdown.sh@28 -- # cat 00:21:29.866 06:16:36 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:29.866 06:16:36 -- target/shutdown.sh@28 -- # cat 00:21:29.866 06:16:36 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:29.866 06:16:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:29.866 06:16:36 -- common/autotest_common.sh@10 -- # set +x 00:21:29.866 Malloc1 00:21:29.866 [2024-07-13 06:16:36.218641] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:29.866 Malloc2 00:21:29.866 Malloc3 00:21:29.866 Malloc4 00:21:30.124 Malloc5 00:21:30.124 Malloc6 00:21:30.124 Malloc7 00:21:30.124 Malloc8 00:21:30.124 Malloc9 00:21:30.383 Malloc10 00:21:30.383 06:16:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:30.383 06:16:36 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:30.383 06:16:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:30.383 06:16:36 -- common/autotest_common.sh@10 -- # set +x 00:21:30.383 06:16:36 -- target/shutdown.sh@78 -- # perfpid=1178135 00:21:30.383 06:16:36 -- target/shutdown.sh@79 -- # waitforlisten 1178135 /var/tmp/bdevperf.sock 00:21:30.383 06:16:36 -- common/autotest_common.sh@819 -- # '[' -z 1178135 ']' 00:21:30.383 06:16:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:30.383 06:16:36 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:30.383 06:16:36 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:30.383 06:16:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:30.383 06:16:36 -- nvmf/common.sh@520 -- # config=() 00:21:30.383 06:16:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:30.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
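For nvmf_shutdown_tc1 the target is brought up with ten subsystems (nqn.2016-06.io.spdk:cnode1 through cnode10), each backed by a 64 MB Malloc bdev with 512-byte blocks, and bdev_svc is then launched on /var/tmp/bdevperf.sock with a JSON configuration produced by gen_nvmf_target_json; the heredoc fragments streaming below are that JSON being assembled, one bdev_nvme_attach_controller entry per subsystem. For a single subsystem the generated entry looks roughly like this, with values substituted from the log (the outer subsystems/bdev wrapper is the standard SPDK JSON-config layout and is an assumption here, since the log only shows the per-controller fragments):

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }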
00:21:30.383 06:16:36 -- nvmf/common.sh@520 -- # local subsystem config 00:21:30.383 06:16:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:30.383 06:16:36 -- common/autotest_common.sh@10 -- # set +x 00:21:30.383 06:16:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:30.383 06:16:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:30.383 { 00:21:30.383 "params": { 00:21:30.383 "name": "Nvme$subsystem", 00:21:30.383 "trtype": "$TEST_TRANSPORT", 00:21:30.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:30.383 "adrfam": "ipv4", 00:21:30.383 "trsvcid": "$NVMF_PORT", 00:21:30.383 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:30.383 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:30.383 "hdgst": ${hdgst:-false}, 00:21:30.383 "ddgst": ${ddgst:-false} 00:21:30.383 }, 00:21:30.383 "method": "bdev_nvme_attach_controller" 00:21:30.383 } 00:21:30.383 EOF 00:21:30.383 )") 00:21:30.383 06:16:36 -- nvmf/common.sh@542 -- # cat 00:21:30.383 06:16:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:30.383 06:16:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:30.383 { 00:21:30.383 "params": { 00:21:30.383 "name": "Nvme$subsystem", 00:21:30.383 "trtype": "$TEST_TRANSPORT", 00:21:30.383 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:30.383 "adrfam": "ipv4", 00:21:30.384 "trsvcid": "$NVMF_PORT", 00:21:30.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:30.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:30.384 "hdgst": ${hdgst:-false}, 00:21:30.384 "ddgst": ${ddgst:-false} 00:21:30.384 }, 00:21:30.384 "method": "bdev_nvme_attach_controller" 00:21:30.384 } 00:21:30.384 EOF 00:21:30.384 )") 00:21:30.384 06:16:36 -- nvmf/common.sh@542 -- # cat 00:21:30.384 06:16:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:30.384 06:16:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:30.384 { 00:21:30.384 "params": { 00:21:30.384 "name": "Nvme$subsystem", 00:21:30.384 "trtype": "$TEST_TRANSPORT", 00:21:30.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:30.384 "adrfam": "ipv4", 00:21:30.384 "trsvcid": "$NVMF_PORT", 00:21:30.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:30.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:30.384 "hdgst": ${hdgst:-false}, 00:21:30.384 "ddgst": ${ddgst:-false} 00:21:30.384 }, 00:21:30.384 "method": "bdev_nvme_attach_controller" 00:21:30.384 } 00:21:30.384 EOF 00:21:30.384 )") 00:21:30.384 06:16:36 -- nvmf/common.sh@542 -- # cat 00:21:30.384 06:16:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:30.384 06:16:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:30.384 { 00:21:30.384 "params": { 00:21:30.384 "name": "Nvme$subsystem", 00:21:30.384 "trtype": "$TEST_TRANSPORT", 00:21:30.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:30.384 "adrfam": "ipv4", 00:21:30.384 "trsvcid": "$NVMF_PORT", 00:21:30.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:30.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:30.384 "hdgst": ${hdgst:-false}, 00:21:30.384 "ddgst": ${ddgst:-false} 00:21:30.384 }, 00:21:30.384 "method": "bdev_nvme_attach_controller" 00:21:30.384 } 00:21:30.384 EOF 00:21:30.384 )") 00:21:30.384 06:16:36 -- nvmf/common.sh@542 -- # cat 00:21:30.384 06:16:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:30.384 06:16:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:30.384 { 00:21:30.384 "params": { 00:21:30.384 "name": "Nvme$subsystem", 00:21:30.384 "trtype": "$TEST_TRANSPORT", 00:21:30.384 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:21:30.384 "adrfam": "ipv4", 00:21:30.384 "trsvcid": "$NVMF_PORT", 00:21:30.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:30.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:30.384 "hdgst": ${hdgst:-false}, 00:21:30.384 "ddgst": ${ddgst:-false} 00:21:30.384 }, 00:21:30.384 "method": "bdev_nvme_attach_controller" 00:21:30.384 } 00:21:30.384 EOF 00:21:30.384 )") 00:21:30.384 06:16:36 -- nvmf/common.sh@542 -- # cat 00:21:30.384 06:16:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:30.384 06:16:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:30.384 { 00:21:30.384 "params": { 00:21:30.384 "name": "Nvme$subsystem", 00:21:30.384 "trtype": "$TEST_TRANSPORT", 00:21:30.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:30.384 "adrfam": "ipv4", 00:21:30.384 "trsvcid": "$NVMF_PORT", 00:21:30.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:30.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:30.384 "hdgst": ${hdgst:-false}, 00:21:30.384 "ddgst": ${ddgst:-false} 00:21:30.384 }, 00:21:30.384 "method": "bdev_nvme_attach_controller" 00:21:30.384 } 00:21:30.384 EOF 00:21:30.384 )") 00:21:30.384 06:16:36 -- nvmf/common.sh@542 -- # cat 00:21:30.384 06:16:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:30.384 06:16:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:30.384 { 00:21:30.384 "params": { 00:21:30.384 "name": "Nvme$subsystem", 00:21:30.384 "trtype": "$TEST_TRANSPORT", 00:21:30.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:30.384 "adrfam": "ipv4", 00:21:30.384 "trsvcid": "$NVMF_PORT", 00:21:30.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:30.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:30.384 "hdgst": ${hdgst:-false}, 00:21:30.384 "ddgst": ${ddgst:-false} 00:21:30.384 }, 00:21:30.384 "method": "bdev_nvme_attach_controller" 00:21:30.384 } 00:21:30.384 EOF 00:21:30.384 )") 00:21:30.384 06:16:36 -- nvmf/common.sh@542 -- # cat 00:21:30.384 06:16:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:30.384 06:16:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:30.384 { 00:21:30.384 "params": { 00:21:30.384 "name": "Nvme$subsystem", 00:21:30.384 "trtype": "$TEST_TRANSPORT", 00:21:30.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:30.384 "adrfam": "ipv4", 00:21:30.384 "trsvcid": "$NVMF_PORT", 00:21:30.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:30.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:30.384 "hdgst": ${hdgst:-false}, 00:21:30.384 "ddgst": ${ddgst:-false} 00:21:30.384 }, 00:21:30.384 "method": "bdev_nvme_attach_controller" 00:21:30.384 } 00:21:30.384 EOF 00:21:30.384 )") 00:21:30.384 06:16:36 -- nvmf/common.sh@542 -- # cat 00:21:30.384 06:16:36 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:30.384 06:16:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:30.384 { 00:21:30.384 "params": { 00:21:30.384 "name": "Nvme$subsystem", 00:21:30.384 "trtype": "$TEST_TRANSPORT", 00:21:30.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:30.384 "adrfam": "ipv4", 00:21:30.384 "trsvcid": "$NVMF_PORT", 00:21:30.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:30.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:30.384 "hdgst": ${hdgst:-false}, 00:21:30.384 "ddgst": ${ddgst:-false} 00:21:30.384 }, 00:21:30.384 "method": "bdev_nvme_attach_controller" 00:21:30.384 } 00:21:30.384 EOF 00:21:30.384 )") 00:21:30.384 06:16:36 -- nvmf/common.sh@542 -- # cat 00:21:30.384 06:16:36 -- nvmf/common.sh@522 -- # for 
subsystem in "${@:-1}" 00:21:30.384 06:16:36 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:30.384 { 00:21:30.384 "params": { 00:21:30.384 "name": "Nvme$subsystem", 00:21:30.384 "trtype": "$TEST_TRANSPORT", 00:21:30.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:30.384 "adrfam": "ipv4", 00:21:30.384 "trsvcid": "$NVMF_PORT", 00:21:30.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:30.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:30.384 "hdgst": ${hdgst:-false}, 00:21:30.384 "ddgst": ${ddgst:-false} 00:21:30.384 }, 00:21:30.384 "method": "bdev_nvme_attach_controller" 00:21:30.384 } 00:21:30.384 EOF 00:21:30.384 )") 00:21:30.384 06:16:36 -- nvmf/common.sh@542 -- # cat 00:21:30.384 06:16:36 -- nvmf/common.sh@544 -- # jq . 00:21:30.384 06:16:36 -- nvmf/common.sh@545 -- # IFS=, 00:21:30.384 06:16:36 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:30.384 "params": { 00:21:30.384 "name": "Nvme1", 00:21:30.384 "trtype": "tcp", 00:21:30.384 "traddr": "10.0.0.2", 00:21:30.384 "adrfam": "ipv4", 00:21:30.384 "trsvcid": "4420", 00:21:30.384 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:30.384 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:30.384 "hdgst": false, 00:21:30.384 "ddgst": false 00:21:30.384 }, 00:21:30.384 "method": "bdev_nvme_attach_controller" 00:21:30.384 },{ 00:21:30.384 "params": { 00:21:30.384 "name": "Nvme2", 00:21:30.384 "trtype": "tcp", 00:21:30.384 "traddr": "10.0.0.2", 00:21:30.384 "adrfam": "ipv4", 00:21:30.384 "trsvcid": "4420", 00:21:30.384 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:30.384 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:30.384 "hdgst": false, 00:21:30.384 "ddgst": false 00:21:30.384 }, 00:21:30.384 "method": "bdev_nvme_attach_controller" 00:21:30.384 },{ 00:21:30.384 "params": { 00:21:30.384 "name": "Nvme3", 00:21:30.384 "trtype": "tcp", 00:21:30.384 "traddr": "10.0.0.2", 00:21:30.384 "adrfam": "ipv4", 00:21:30.384 "trsvcid": "4420", 00:21:30.384 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:30.384 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:30.384 "hdgst": false, 00:21:30.384 "ddgst": false 00:21:30.384 }, 00:21:30.384 "method": "bdev_nvme_attach_controller" 00:21:30.384 },{ 00:21:30.384 "params": { 00:21:30.384 "name": "Nvme4", 00:21:30.384 "trtype": "tcp", 00:21:30.384 "traddr": "10.0.0.2", 00:21:30.384 "adrfam": "ipv4", 00:21:30.384 "trsvcid": "4420", 00:21:30.384 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:30.384 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:30.384 "hdgst": false, 00:21:30.384 "ddgst": false 00:21:30.384 }, 00:21:30.384 "method": "bdev_nvme_attach_controller" 00:21:30.384 },{ 00:21:30.384 "params": { 00:21:30.384 "name": "Nvme5", 00:21:30.384 "trtype": "tcp", 00:21:30.384 "traddr": "10.0.0.2", 00:21:30.384 "adrfam": "ipv4", 00:21:30.384 "trsvcid": "4420", 00:21:30.384 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:30.384 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:30.384 "hdgst": false, 00:21:30.384 "ddgst": false 00:21:30.384 }, 00:21:30.384 "method": "bdev_nvme_attach_controller" 00:21:30.384 },{ 00:21:30.384 "params": { 00:21:30.384 "name": "Nvme6", 00:21:30.384 "trtype": "tcp", 00:21:30.384 "traddr": "10.0.0.2", 00:21:30.384 "adrfam": "ipv4", 00:21:30.384 "trsvcid": "4420", 00:21:30.384 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:30.384 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:30.384 "hdgst": false, 00:21:30.384 "ddgst": false 00:21:30.384 }, 00:21:30.384 "method": "bdev_nvme_attach_controller" 00:21:30.384 },{ 00:21:30.384 "params": { 00:21:30.384 "name": "Nvme7", 00:21:30.384 "trtype": 
"tcp", 00:21:30.384 "traddr": "10.0.0.2", 00:21:30.384 "adrfam": "ipv4", 00:21:30.385 "trsvcid": "4420", 00:21:30.385 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:30.385 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:30.385 "hdgst": false, 00:21:30.385 "ddgst": false 00:21:30.385 }, 00:21:30.385 "method": "bdev_nvme_attach_controller" 00:21:30.385 },{ 00:21:30.385 "params": { 00:21:30.385 "name": "Nvme8", 00:21:30.385 "trtype": "tcp", 00:21:30.385 "traddr": "10.0.0.2", 00:21:30.385 "adrfam": "ipv4", 00:21:30.385 "trsvcid": "4420", 00:21:30.385 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:30.385 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:30.385 "hdgst": false, 00:21:30.385 "ddgst": false 00:21:30.385 }, 00:21:30.385 "method": "bdev_nvme_attach_controller" 00:21:30.385 },{ 00:21:30.385 "params": { 00:21:30.385 "name": "Nvme9", 00:21:30.385 "trtype": "tcp", 00:21:30.385 "traddr": "10.0.0.2", 00:21:30.385 "adrfam": "ipv4", 00:21:30.385 "trsvcid": "4420", 00:21:30.385 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:30.385 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:30.385 "hdgst": false, 00:21:30.385 "ddgst": false 00:21:30.385 }, 00:21:30.385 "method": "bdev_nvme_attach_controller" 00:21:30.385 },{ 00:21:30.385 "params": { 00:21:30.385 "name": "Nvme10", 00:21:30.385 "trtype": "tcp", 00:21:30.385 "traddr": "10.0.0.2", 00:21:30.385 "adrfam": "ipv4", 00:21:30.385 "trsvcid": "4420", 00:21:30.385 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:30.385 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:30.385 "hdgst": false, 00:21:30.385 "ddgst": false 00:21:30.385 }, 00:21:30.385 "method": "bdev_nvme_attach_controller" 00:21:30.385 }' 00:21:30.385 [2024-07-13 06:16:36.729907] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:30.385 [2024-07-13 06:16:36.729985] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:30.385 EAL: No free 2048 kB hugepages reported on node 1 00:21:30.385 [2024-07-13 06:16:36.793700] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.647 [2024-07-13 06:16:36.902410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.182 06:16:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:33.182 06:16:39 -- common/autotest_common.sh@852 -- # return 0 00:21:33.182 06:16:39 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:33.182 06:16:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.182 06:16:39 -- common/autotest_common.sh@10 -- # set +x 00:21:33.182 06:16:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.182 06:16:39 -- target/shutdown.sh@83 -- # kill -9 1178135 00:21:33.182 06:16:39 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:21:33.182 06:16:39 -- target/shutdown.sh@87 -- # sleep 1 00:21:33.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1178135 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:33.751 06:16:40 -- target/shutdown.sh@88 -- # kill -0 1177823 00:21:33.751 06:16:40 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:33.751 06:16:40 -- target/shutdown.sh@91 -- # 
gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:33.751 06:16:40 -- nvmf/common.sh@520 -- # config=() 00:21:33.751 06:16:40 -- nvmf/common.sh@520 -- # local subsystem config 00:21:33.751 06:16:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:33.751 06:16:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:33.751 { 00:21:33.751 "params": { 00:21:33.751 "name": "Nvme$subsystem", 00:21:33.751 "trtype": "$TEST_TRANSPORT", 00:21:33.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:33.751 "adrfam": "ipv4", 00:21:33.751 "trsvcid": "$NVMF_PORT", 00:21:33.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:33.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:33.751 "hdgst": ${hdgst:-false}, 00:21:33.751 "ddgst": ${ddgst:-false} 00:21:33.751 }, 00:21:33.751 "method": "bdev_nvme_attach_controller" 00:21:33.751 } 00:21:33.751 EOF 00:21:33.751 )") 00:21:33.751 06:16:40 -- nvmf/common.sh@542 -- # cat 00:21:33.751 06:16:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:33.751 06:16:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:33.751 { 00:21:33.751 "params": { 00:21:33.751 "name": "Nvme$subsystem", 00:21:33.751 "trtype": "$TEST_TRANSPORT", 00:21:33.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:33.751 "adrfam": "ipv4", 00:21:33.751 "trsvcid": "$NVMF_PORT", 00:21:33.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:33.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:33.751 "hdgst": ${hdgst:-false}, 00:21:33.751 "ddgst": ${ddgst:-false} 00:21:33.751 }, 00:21:33.751 "method": "bdev_nvme_attach_controller" 00:21:33.751 } 00:21:33.751 EOF 00:21:33.751 )") 00:21:33.751 06:16:40 -- nvmf/common.sh@542 -- # cat 00:21:33.751 06:16:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:33.751 06:16:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:33.751 { 00:21:33.751 "params": { 00:21:33.751 "name": "Nvme$subsystem", 00:21:33.751 "trtype": "$TEST_TRANSPORT", 00:21:33.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:33.751 "adrfam": "ipv4", 00:21:33.751 "trsvcid": "$NVMF_PORT", 00:21:33.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:33.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:33.751 "hdgst": ${hdgst:-false}, 00:21:33.751 "ddgst": ${ddgst:-false} 00:21:33.751 }, 00:21:33.751 "method": "bdev_nvme_attach_controller" 00:21:33.751 } 00:21:33.751 EOF 00:21:33.751 )") 00:21:33.751 06:16:40 -- nvmf/common.sh@542 -- # cat 00:21:33.751 06:16:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:33.751 06:16:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:33.751 { 00:21:33.751 "params": { 00:21:33.751 "name": "Nvme$subsystem", 00:21:33.751 "trtype": "$TEST_TRANSPORT", 00:21:33.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:33.751 "adrfam": "ipv4", 00:21:33.751 "trsvcid": "$NVMF_PORT", 00:21:33.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:33.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:33.751 "hdgst": ${hdgst:-false}, 00:21:33.751 "ddgst": ${ddgst:-false} 00:21:33.751 }, 00:21:33.751 "method": "bdev_nvme_attach_controller" 00:21:33.751 } 00:21:33.751 EOF 00:21:33.751 )") 00:21:33.751 06:16:40 -- nvmf/common.sh@542 -- # cat 00:21:33.751 06:16:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:33.751 06:16:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:33.751 { 00:21:33.751 "params": { 00:21:33.751 "name": "Nvme$subsystem", 00:21:33.751 "trtype": "$TEST_TRANSPORT", 00:21:33.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:33.751 "adrfam": "ipv4", 
00:21:33.751 "trsvcid": "$NVMF_PORT", 00:21:33.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:33.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:33.751 "hdgst": ${hdgst:-false}, 00:21:33.751 "ddgst": ${ddgst:-false} 00:21:33.751 }, 00:21:33.751 "method": "bdev_nvme_attach_controller" 00:21:33.751 } 00:21:33.751 EOF 00:21:33.751 )") 00:21:33.751 06:16:40 -- nvmf/common.sh@542 -- # cat 00:21:33.751 06:16:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:33.751 06:16:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:33.751 { 00:21:33.751 "params": { 00:21:33.751 "name": "Nvme$subsystem", 00:21:33.751 "trtype": "$TEST_TRANSPORT", 00:21:33.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:33.751 "adrfam": "ipv4", 00:21:33.751 "trsvcid": "$NVMF_PORT", 00:21:33.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:33.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:33.751 "hdgst": ${hdgst:-false}, 00:21:33.751 "ddgst": ${ddgst:-false} 00:21:33.751 }, 00:21:33.751 "method": "bdev_nvme_attach_controller" 00:21:33.751 } 00:21:33.751 EOF 00:21:33.751 )") 00:21:33.751 06:16:40 -- nvmf/common.sh@542 -- # cat 00:21:33.751 06:16:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:33.751 06:16:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:33.751 { 00:21:33.751 "params": { 00:21:33.751 "name": "Nvme$subsystem", 00:21:33.751 "trtype": "$TEST_TRANSPORT", 00:21:33.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:33.751 "adrfam": "ipv4", 00:21:33.751 "trsvcid": "$NVMF_PORT", 00:21:33.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:33.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:33.751 "hdgst": ${hdgst:-false}, 00:21:33.751 "ddgst": ${ddgst:-false} 00:21:33.751 }, 00:21:33.751 "method": "bdev_nvme_attach_controller" 00:21:33.751 } 00:21:33.751 EOF 00:21:33.751 )") 00:21:33.751 06:16:40 -- nvmf/common.sh@542 -- # cat 00:21:33.751 06:16:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:33.751 06:16:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:33.751 { 00:21:33.751 "params": { 00:21:33.751 "name": "Nvme$subsystem", 00:21:33.751 "trtype": "$TEST_TRANSPORT", 00:21:33.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:33.751 "adrfam": "ipv4", 00:21:33.751 "trsvcid": "$NVMF_PORT", 00:21:33.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:33.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:33.751 "hdgst": ${hdgst:-false}, 00:21:33.751 "ddgst": ${ddgst:-false} 00:21:33.751 }, 00:21:33.751 "method": "bdev_nvme_attach_controller" 00:21:33.751 } 00:21:33.751 EOF 00:21:33.751 )") 00:21:33.751 06:16:40 -- nvmf/common.sh@542 -- # cat 00:21:33.751 06:16:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:33.751 06:16:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:33.751 { 00:21:33.751 "params": { 00:21:33.751 "name": "Nvme$subsystem", 00:21:33.751 "trtype": "$TEST_TRANSPORT", 00:21:33.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:33.751 "adrfam": "ipv4", 00:21:33.751 "trsvcid": "$NVMF_PORT", 00:21:33.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:33.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:33.751 "hdgst": ${hdgst:-false}, 00:21:33.751 "ddgst": ${ddgst:-false} 00:21:33.751 }, 00:21:33.751 "method": "bdev_nvme_attach_controller" 00:21:33.751 } 00:21:33.751 EOF 00:21:33.751 )") 00:21:33.751 06:16:40 -- nvmf/common.sh@542 -- # cat 00:21:33.751 06:16:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:33.751 06:16:40 -- 
nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:33.751 { 00:21:33.751 "params": { 00:21:33.751 "name": "Nvme$subsystem", 00:21:33.751 "trtype": "$TEST_TRANSPORT", 00:21:33.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:33.751 "adrfam": "ipv4", 00:21:33.751 "trsvcid": "$NVMF_PORT", 00:21:33.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:33.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:33.751 "hdgst": ${hdgst:-false}, 00:21:33.751 "ddgst": ${ddgst:-false} 00:21:33.751 }, 00:21:33.751 "method": "bdev_nvme_attach_controller" 00:21:33.751 } 00:21:33.751 EOF 00:21:33.751 )") 00:21:33.751 06:16:40 -- nvmf/common.sh@542 -- # cat 00:21:33.751 06:16:40 -- nvmf/common.sh@544 -- # jq . 00:21:33.751 06:16:40 -- nvmf/common.sh@545 -- # IFS=, 00:21:33.751 06:16:40 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:33.751 "params": { 00:21:33.751 "name": "Nvme1", 00:21:33.751 "trtype": "tcp", 00:21:33.751 "traddr": "10.0.0.2", 00:21:33.751 "adrfam": "ipv4", 00:21:33.751 "trsvcid": "4420", 00:21:33.751 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.751 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:33.751 "hdgst": false, 00:21:33.751 "ddgst": false 00:21:33.751 }, 00:21:33.751 "method": "bdev_nvme_attach_controller" 00:21:33.751 },{ 00:21:33.751 "params": { 00:21:33.751 "name": "Nvme2", 00:21:33.751 "trtype": "tcp", 00:21:33.752 "traddr": "10.0.0.2", 00:21:33.752 "adrfam": "ipv4", 00:21:33.752 "trsvcid": "4420", 00:21:33.752 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:33.752 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:33.752 "hdgst": false, 00:21:33.752 "ddgst": false 00:21:33.752 }, 00:21:33.752 "method": "bdev_nvme_attach_controller" 00:21:33.752 },{ 00:21:33.752 "params": { 00:21:33.752 "name": "Nvme3", 00:21:33.752 "trtype": "tcp", 00:21:33.752 "traddr": "10.0.0.2", 00:21:33.752 "adrfam": "ipv4", 00:21:33.752 "trsvcid": "4420", 00:21:33.752 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:33.752 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:33.752 "hdgst": false, 00:21:33.752 "ddgst": false 00:21:33.752 }, 00:21:33.752 "method": "bdev_nvme_attach_controller" 00:21:33.752 },{ 00:21:33.752 "params": { 00:21:33.752 "name": "Nvme4", 00:21:33.752 "trtype": "tcp", 00:21:33.752 "traddr": "10.0.0.2", 00:21:33.752 "adrfam": "ipv4", 00:21:33.752 "trsvcid": "4420", 00:21:33.752 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:33.752 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:33.752 "hdgst": false, 00:21:33.752 "ddgst": false 00:21:33.752 }, 00:21:33.752 "method": "bdev_nvme_attach_controller" 00:21:33.752 },{ 00:21:33.752 "params": { 00:21:33.752 "name": "Nvme5", 00:21:33.752 "trtype": "tcp", 00:21:33.752 "traddr": "10.0.0.2", 00:21:33.752 "adrfam": "ipv4", 00:21:33.752 "trsvcid": "4420", 00:21:33.752 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:33.752 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:33.752 "hdgst": false, 00:21:33.752 "ddgst": false 00:21:33.752 }, 00:21:33.752 "method": "bdev_nvme_attach_controller" 00:21:33.752 },{ 00:21:33.752 "params": { 00:21:33.752 "name": "Nvme6", 00:21:33.752 "trtype": "tcp", 00:21:33.752 "traddr": "10.0.0.2", 00:21:33.752 "adrfam": "ipv4", 00:21:33.752 "trsvcid": "4420", 00:21:33.752 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:33.752 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:33.752 "hdgst": false, 00:21:33.752 "ddgst": false 00:21:33.752 }, 00:21:33.752 "method": "bdev_nvme_attach_controller" 00:21:33.752 },{ 00:21:33.752 "params": { 00:21:33.752 "name": "Nvme7", 00:21:33.752 "trtype": "tcp", 00:21:33.752 "traddr": "10.0.0.2", 
00:21:33.752 "adrfam": "ipv4", 00:21:33.752 "trsvcid": "4420", 00:21:33.752 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:33.752 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:33.752 "hdgst": false, 00:21:33.752 "ddgst": false 00:21:33.752 }, 00:21:33.752 "method": "bdev_nvme_attach_controller" 00:21:33.752 },{ 00:21:33.752 "params": { 00:21:33.752 "name": "Nvme8", 00:21:33.752 "trtype": "tcp", 00:21:33.752 "traddr": "10.0.0.2", 00:21:33.752 "adrfam": "ipv4", 00:21:33.752 "trsvcid": "4420", 00:21:33.752 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:33.752 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:33.752 "hdgst": false, 00:21:33.752 "ddgst": false 00:21:33.752 }, 00:21:33.752 "method": "bdev_nvme_attach_controller" 00:21:33.752 },{ 00:21:33.752 "params": { 00:21:33.752 "name": "Nvme9", 00:21:33.752 "trtype": "tcp", 00:21:33.752 "traddr": "10.0.0.2", 00:21:33.752 "adrfam": "ipv4", 00:21:33.752 "trsvcid": "4420", 00:21:33.752 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:33.752 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:33.752 "hdgst": false, 00:21:33.752 "ddgst": false 00:21:33.752 }, 00:21:33.752 "method": "bdev_nvme_attach_controller" 00:21:33.752 },{ 00:21:33.752 "params": { 00:21:33.752 "name": "Nvme10", 00:21:33.752 "trtype": "tcp", 00:21:33.752 "traddr": "10.0.0.2", 00:21:33.752 "adrfam": "ipv4", 00:21:33.752 "trsvcid": "4420", 00:21:33.752 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:33.752 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:33.752 "hdgst": false, 00:21:33.752 "ddgst": false 00:21:33.752 }, 00:21:33.752 "method": "bdev_nvme_attach_controller" 00:21:33.752 }' 00:21:33.752 [2024-07-13 06:16:40.222839] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:33.752 [2024-07-13 06:16:40.222957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1178573 ] 00:21:33.752 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.010 [2024-07-13 06:16:40.288158] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.010 [2024-07-13 06:16:40.396055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.390 Running I/O for 1 seconds... 
00:21:36.765
00:21:36.765 Latency(us)
00:21:36.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:36.765 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:36.765 Verification LBA range: start 0x0 length 0x400
00:21:36.765 Nvme1n1 : 1.06 408.86 25.55 0.00 0.00 152944.40 25049.32 118838.61
00:21:36.765 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:36.765 Verification LBA range: start 0x0 length 0x400
00:21:36.765 Nvme2n1 : 1.06 375.41 23.46 0.00 0.00 164770.02 13398.47 153791.15
00:21:36.765 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:36.765 Verification LBA range: start 0x0 length 0x400
00:21:36.765 Nvme3n1 : 1.05 411.77 25.74 0.00 0.00 149791.76 23981.32 116508.44
00:21:36.765 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:36.765 Verification LBA range: start 0x0 length 0x400
00:21:36.765 Nvme4n1 : 1.07 407.11 25.44 0.00 0.00 150341.75 25049.32 116508.44
00:21:36.765 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:36.765 Verification LBA range: start 0x0 length 0x400
00:21:36.765 Nvme5n1 : 1.07 405.18 25.32 0.00 0.00 150683.70 18835.53 116508.44
00:21:36.765 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:36.765 Verification LBA range: start 0x0 length 0x400
00:21:36.765 Nvme6n1 : 1.08 369.23 23.08 0.00 0.00 162993.02 37865.24 137479.96
00:21:36.765 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:36.765 Verification LBA range: start 0x0 length 0x400
00:21:36.765 Nvme7n1 : 1.08 403.64 25.23 0.00 0.00 149179.47 18252.99 118061.89
00:21:36.765 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:36.765 Verification LBA range: start 0x0 length 0x400
00:21:36.765 Nvme8n1 : 1.08 401.73 25.11 0.00 0.00 149496.46 12621.75 120392.06
00:21:36.765 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:36.765 Verification LBA range: start 0x0 length 0x400
00:21:36.765 Nvme9n1 : 1.08 400.88 25.05 0.00 0.00 149090.04 9806.13 124275.67
00:21:36.765 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:36.765 Verification LBA range: start 0x0 length 0x400
00:21:36.765 Nvme10n1 : 1.09 403.12 25.20 0.00 0.00 147959.56 3398.16 145247.19
00:21:36.765 ===================================================================================================================
00:21:36.765 Total : 3986.92 249.18 0.00 0.00 152534.32 3398.16 153791.15
00:21:36.765 06:16:43 -- target/shutdown.sh@93 -- # stoptarget
00:21:36.765 06:16:43 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:21:36.765 06:16:43 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:21:36.765 06:16:43 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:21:36.765 06:16:43 -- target/shutdown.sh@45 -- # nvmftestfini
00:21:36.765 06:16:43 -- nvmf/common.sh@476 -- # nvmfcleanup
00:21:36.765 06:16:43 -- nvmf/common.sh@116 -- # sync
00:21:36.765 06:16:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:21:36.765 06:16:43 -- nvmf/common.sh@119 -- # set +e
00:21:36.765 06:16:43 -- nvmf/common.sh@120 -- # for i in {1..20}
00:21:36.765 06:16:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:21:36.765 rmmod nvme_tcp
00:21:36.765 rmmod nvme_fabrics
00:21:36.765 rmmod nvme_keyring
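Note: a quick cross-check of the results table above. bdevperf reports MiB/s as IOPS multiplied by the configured I/O size (-o 65536, i.e. 64 KiB), so the Total row is internally consistent:

# MiB/s = IOPS * 65536 B / 2^20 B; figures taken from the Total row above.
awk 'BEGIN { printf "%.2f MiB/s\n", 3986.92 * 65536 / 1048576 }'   # -> 249.18 MiB/s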
00:21:36.765 06:16:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:36.765 06:16:43 -- nvmf/common.sh@123 -- # set -e 00:21:36.765 06:16:43 -- nvmf/common.sh@124 -- # return 0 00:21:36.765 06:16:43 -- nvmf/common.sh@477 -- # '[' -n 1177823 ']' 00:21:36.765 06:16:43 -- nvmf/common.sh@478 -- # killprocess 1177823 00:21:36.765 06:16:43 -- common/autotest_common.sh@926 -- # '[' -z 1177823 ']' 00:21:36.765 06:16:43 -- common/autotest_common.sh@930 -- # kill -0 1177823 00:21:36.765 06:16:43 -- common/autotest_common.sh@931 -- # uname 00:21:36.765 06:16:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:36.765 06:16:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1177823 00:21:37.023 06:16:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:37.023 06:16:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:37.023 06:16:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1177823' 00:21:37.024 killing process with pid 1177823 00:21:37.024 06:16:43 -- common/autotest_common.sh@945 -- # kill 1177823 00:21:37.024 06:16:43 -- common/autotest_common.sh@950 -- # wait 1177823 00:21:37.588 06:16:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:37.588 06:16:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:37.588 06:16:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:37.588 06:16:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:37.588 06:16:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:37.588 06:16:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.588 06:16:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:37.588 06:16:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.498 06:16:45 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:39.498 00:21:39.498 real 0m12.992s 00:21:39.498 user 0m39.444s 00:21:39.498 sys 0m3.346s 00:21:39.498 06:16:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:39.498 06:16:45 -- common/autotest_common.sh@10 -- # set +x 00:21:39.498 ************************************ 00:21:39.498 END TEST nvmf_shutdown_tc1 00:21:39.498 ************************************ 00:21:39.498 06:16:45 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:39.498 06:16:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:39.498 06:16:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:39.498 06:16:45 -- common/autotest_common.sh@10 -- # set +x 00:21:39.498 ************************************ 00:21:39.498 START TEST nvmf_shutdown_tc2 00:21:39.498 ************************************ 00:21:39.498 06:16:45 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:21:39.498 06:16:45 -- target/shutdown.sh@98 -- # starttarget 00:21:39.498 06:16:45 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:39.498 06:16:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:39.498 06:16:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:39.498 06:16:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:39.498 06:16:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:39.498 06:16:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:39.498 06:16:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.498 06:16:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:39.498 06:16:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.498 06:16:45 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:39.498 06:16:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:39.498 06:16:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:39.498 06:16:45 -- common/autotest_common.sh@10 -- # set +x 00:21:39.498 06:16:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:39.498 06:16:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:39.498 06:16:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:39.498 06:16:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:39.498 06:16:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:39.498 06:16:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:39.498 06:16:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:39.498 06:16:45 -- nvmf/common.sh@294 -- # net_devs=() 00:21:39.498 06:16:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:39.498 06:16:45 -- nvmf/common.sh@295 -- # e810=() 00:21:39.498 06:16:45 -- nvmf/common.sh@295 -- # local -ga e810 00:21:39.498 06:16:45 -- nvmf/common.sh@296 -- # x722=() 00:21:39.498 06:16:45 -- nvmf/common.sh@296 -- # local -ga x722 00:21:39.498 06:16:45 -- nvmf/common.sh@297 -- # mlx=() 00:21:39.498 06:16:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:39.498 06:16:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:39.498 06:16:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:39.498 06:16:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:39.498 06:16:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:39.498 06:16:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:39.498 06:16:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:39.498 06:16:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:39.498 06:16:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:39.498 06:16:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:39.498 06:16:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:39.498 06:16:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:39.498 06:16:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:39.498 06:16:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:39.498 06:16:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:39.498 06:16:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:39.498 06:16:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:39.498 06:16:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:39.498 06:16:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:39.498 06:16:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:39.498 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:39.498 06:16:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:39.498 06:16:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:39.498 06:16:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.498 06:16:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.498 06:16:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:39.498 06:16:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:39.498 06:16:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:39.498 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:39.498 06:16:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:39.498 06:16:45 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:39.498 06:16:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:39.498 06:16:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:39.498 06:16:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:39.498 06:16:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:39.498 06:16:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:39.498 06:16:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:39.498 06:16:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:39.498 06:16:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.498 06:16:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:39.498 06:16:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.498 06:16:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:39.498 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:39.498 06:16:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.498 06:16:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:39.498 06:16:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:39.498 06:16:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:39.498 06:16:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:39.498 06:16:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:39.498 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:39.498 06:16:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:39.498 06:16:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:39.498 06:16:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:39.498 06:16:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:39.498 06:16:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:39.498 06:16:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:39.498 06:16:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:39.498 06:16:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:39.498 06:16:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:39.498 06:16:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:39.498 06:16:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:39.498 06:16:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:39.498 06:16:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:39.498 06:16:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:39.498 06:16:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:39.498 06:16:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:39.498 06:16:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:39.498 06:16:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:39.498 06:16:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:39.498 06:16:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:39.498 06:16:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:39.498 06:16:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:39.498 06:16:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:39.755 06:16:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:39.755 06:16:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
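Note: the nvmf_tcp_init trace above reduces to the following sequence (commands condensed from the log; cvl_0_0 and cvl_0_1 are the two E810 ports found during device discovery). The first port becomes the target side inside a private network namespace, the second stays in the root namespace as the initiator side, and the pings that follow verify the path in both directions.

# Condensed from the nvmf_tcp_init steps traced above (run as root).
ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the first port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, private netns
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in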
00:21:39.755 06:16:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:39.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:39.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:21:39.755 00:21:39.755 --- 10.0.0.2 ping statistics --- 00:21:39.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.755 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:21:39.755 06:16:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:39.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:39.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:21:39.755 00:21:39.755 --- 10.0.0.1 ping statistics --- 00:21:39.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.755 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:21:39.755 06:16:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:39.755 06:16:46 -- nvmf/common.sh@410 -- # return 0 00:21:39.755 06:16:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:39.755 06:16:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:39.755 06:16:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:39.755 06:16:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:39.755 06:16:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:39.755 06:16:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:39.755 06:16:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:39.755 06:16:46 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:39.755 06:16:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:39.755 06:16:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:39.755 06:16:46 -- common/autotest_common.sh@10 -- # set +x 00:21:39.755 06:16:46 -- nvmf/common.sh@469 -- # nvmfpid=1179361 00:21:39.755 06:16:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:39.755 06:16:46 -- nvmf/common.sh@470 -- # waitforlisten 1179361 00:21:39.755 06:16:46 -- common/autotest_common.sh@819 -- # '[' -z 1179361 ']' 00:21:39.755 06:16:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.755 06:16:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:39.755 06:16:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:39.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:39.755 06:16:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:39.755 06:16:46 -- common/autotest_common.sh@10 -- # set +x 00:21:39.755 [2024-07-13 06:16:46.114687] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:21:39.755 [2024-07-13 06:16:46.114785] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:39.755 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.755 [2024-07-13 06:16:46.184129] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:40.013 [2024-07-13 06:16:46.300374] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:40.013 [2024-07-13 06:16:46.300549] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.013 [2024-07-13 06:16:46.300569] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.013 [2024-07-13 06:16:46.300584] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:40.013 [2024-07-13 06:16:46.300679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:40.013 [2024-07-13 06:16:46.300793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:40.013 [2024-07-13 06:16:46.300894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:40.013 [2024-07-13 06:16:46.300899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.580 06:16:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:40.580 06:16:47 -- common/autotest_common.sh@852 -- # return 0 00:21:40.580 06:16:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:40.580 06:16:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:40.580 06:16:47 -- common/autotest_common.sh@10 -- # set +x 00:21:40.836 06:16:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.837 06:16:47 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:40.837 06:16:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:40.837 06:16:47 -- common/autotest_common.sh@10 -- # set +x 00:21:40.837 [2024-07-13 06:16:47.106368] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.837 06:16:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:40.837 06:16:47 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:40.837 06:16:47 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:40.837 06:16:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:40.837 06:16:47 -- common/autotest_common.sh@10 -- # set +x 00:21:40.837 06:16:47 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:40.837 06:16:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.837 06:16:47 -- target/shutdown.sh@28 -- # cat 00:21:40.837 06:16:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.837 06:16:47 -- target/shutdown.sh@28 -- # cat 00:21:40.837 06:16:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.837 06:16:47 -- target/shutdown.sh@28 -- # cat 00:21:40.837 06:16:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.837 06:16:47 -- target/shutdown.sh@28 -- # cat 00:21:40.837 06:16:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.837 06:16:47 -- target/shutdown.sh@28 -- # cat 00:21:40.837 06:16:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.837 06:16:47 -- 
target/shutdown.sh@28 -- # cat 00:21:40.837 06:16:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.837 06:16:47 -- target/shutdown.sh@28 -- # cat 00:21:40.837 06:16:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.837 06:16:47 -- target/shutdown.sh@28 -- # cat 00:21:40.837 06:16:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.837 06:16:47 -- target/shutdown.sh@28 -- # cat 00:21:40.837 06:16:47 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.837 06:16:47 -- target/shutdown.sh@28 -- # cat 00:21:40.837 06:16:47 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:40.837 06:16:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:40.837 06:16:47 -- common/autotest_common.sh@10 -- # set +x 00:21:40.837 Malloc1 00:21:40.837 [2024-07-13 06:16:47.181645] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.837 Malloc2 00:21:40.837 Malloc3 00:21:40.837 Malloc4 00:21:40.837 Malloc5 00:21:41.093 Malloc6 00:21:41.093 Malloc7 00:21:41.093 Malloc8 00:21:41.093 Malloc9 00:21:41.093 Malloc10 00:21:41.351 06:16:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.351 06:16:47 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:41.351 06:16:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:41.351 06:16:47 -- common/autotest_common.sh@10 -- # set +x 00:21:41.351 06:16:47 -- target/shutdown.sh@102 -- # perfpid=1179555 00:21:41.351 06:16:47 -- target/shutdown.sh@103 -- # waitforlisten 1179555 /var/tmp/bdevperf.sock 00:21:41.351 06:16:47 -- common/autotest_common.sh@819 -- # '[' -z 1179555 ']' 00:21:41.351 06:16:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:41.351 06:16:47 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:41.351 06:16:47 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:41.351 06:16:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:41.351 06:16:47 -- nvmf/common.sh@520 -- # config=() 00:21:41.351 06:16:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:41.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
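Note: the create_subsystems loop above only batches RPC snippets into rpcs.txt and replays them with a single rpc_cmd; the file itself is not echoed to the log. Per subsystem, the batched calls correspond roughly to the rpc.py sequence below. This is a hedged sketch: the malloc size, block size and serial number are illustrative and are not values taken from rpcs.txt.

# Approximate per-subsystem setup behind the Malloc1..Malloc10 lines above,
# issued against the target's RPC socket. Sizes and serial are illustrative.
i=1
scripts/rpc.py bdev_malloc_create -b "Malloc$i" 64 512
scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420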
00:21:41.351 06:16:47 -- nvmf/common.sh@520 -- # local subsystem config 00:21:41.351 06:16:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:41.351 06:16:47 -- common/autotest_common.sh@10 -- # set +x 00:21:41.351 06:16:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:41.351 06:16:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:41.351 { 00:21:41.351 "params": { 00:21:41.351 "name": "Nvme$subsystem", 00:21:41.351 "trtype": "$TEST_TRANSPORT", 00:21:41.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:41.351 "adrfam": "ipv4", 00:21:41.351 "trsvcid": "$NVMF_PORT", 00:21:41.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:41.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:41.351 "hdgst": ${hdgst:-false}, 00:21:41.351 "ddgst": ${ddgst:-false} 00:21:41.351 }, 00:21:41.351 "method": "bdev_nvme_attach_controller" 00:21:41.351 } 00:21:41.351 EOF 00:21:41.351 )") 00:21:41.351 06:16:47 -- nvmf/common.sh@542 -- # cat 00:21:41.351 06:16:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:41.351 06:16:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:41.351 { 00:21:41.351 "params": { 00:21:41.351 "name": "Nvme$subsystem", 00:21:41.351 "trtype": "$TEST_TRANSPORT", 00:21:41.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:41.351 "adrfam": "ipv4", 00:21:41.351 "trsvcid": "$NVMF_PORT", 00:21:41.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:41.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:41.351 "hdgst": ${hdgst:-false}, 00:21:41.351 "ddgst": ${ddgst:-false} 00:21:41.351 }, 00:21:41.351 "method": "bdev_nvme_attach_controller" 00:21:41.351 } 00:21:41.351 EOF 00:21:41.351 )") 00:21:41.351 06:16:47 -- nvmf/common.sh@542 -- # cat 00:21:41.351 06:16:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:41.351 06:16:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:41.351 { 00:21:41.351 "params": { 00:21:41.351 "name": "Nvme$subsystem", 00:21:41.351 "trtype": "$TEST_TRANSPORT", 00:21:41.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:41.351 "adrfam": "ipv4", 00:21:41.351 "trsvcid": "$NVMF_PORT", 00:21:41.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:41.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:41.351 "hdgst": ${hdgst:-false}, 00:21:41.351 "ddgst": ${ddgst:-false} 00:21:41.351 }, 00:21:41.351 "method": "bdev_nvme_attach_controller" 00:21:41.351 } 00:21:41.351 EOF 00:21:41.351 )") 00:21:41.351 06:16:47 -- nvmf/common.sh@542 -- # cat 00:21:41.351 06:16:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:41.351 06:16:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:41.351 { 00:21:41.351 "params": { 00:21:41.351 "name": "Nvme$subsystem", 00:21:41.351 "trtype": "$TEST_TRANSPORT", 00:21:41.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:41.351 "adrfam": "ipv4", 00:21:41.351 "trsvcid": "$NVMF_PORT", 00:21:41.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:41.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:41.351 "hdgst": ${hdgst:-false}, 00:21:41.351 "ddgst": ${ddgst:-false} 00:21:41.351 }, 00:21:41.351 "method": "bdev_nvme_attach_controller" 00:21:41.351 } 00:21:41.351 EOF 00:21:41.351 )") 00:21:41.351 06:16:47 -- nvmf/common.sh@542 -- # cat 00:21:41.351 06:16:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:41.351 06:16:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:41.351 { 00:21:41.351 "params": { 00:21:41.351 "name": "Nvme$subsystem", 00:21:41.351 "trtype": "$TEST_TRANSPORT", 00:21:41.351 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:21:41.351 "adrfam": "ipv4", 00:21:41.351 "trsvcid": "$NVMF_PORT", 00:21:41.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:41.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:41.351 "hdgst": ${hdgst:-false}, 00:21:41.351 "ddgst": ${ddgst:-false} 00:21:41.351 }, 00:21:41.351 "method": "bdev_nvme_attach_controller" 00:21:41.351 } 00:21:41.351 EOF 00:21:41.351 )") 00:21:41.351 06:16:47 -- nvmf/common.sh@542 -- # cat 00:21:41.351 06:16:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:41.351 06:16:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:41.351 { 00:21:41.351 "params": { 00:21:41.351 "name": "Nvme$subsystem", 00:21:41.351 "trtype": "$TEST_TRANSPORT", 00:21:41.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:41.351 "adrfam": "ipv4", 00:21:41.351 "trsvcid": "$NVMF_PORT", 00:21:41.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:41.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:41.351 "hdgst": ${hdgst:-false}, 00:21:41.351 "ddgst": ${ddgst:-false} 00:21:41.351 }, 00:21:41.351 "method": "bdev_nvme_attach_controller" 00:21:41.351 } 00:21:41.351 EOF 00:21:41.351 )") 00:21:41.351 06:16:47 -- nvmf/common.sh@542 -- # cat 00:21:41.351 06:16:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:41.351 06:16:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:41.351 { 00:21:41.351 "params": { 00:21:41.351 "name": "Nvme$subsystem", 00:21:41.351 "trtype": "$TEST_TRANSPORT", 00:21:41.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:41.351 "adrfam": "ipv4", 00:21:41.351 "trsvcid": "$NVMF_PORT", 00:21:41.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:41.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:41.351 "hdgst": ${hdgst:-false}, 00:21:41.351 "ddgst": ${ddgst:-false} 00:21:41.351 }, 00:21:41.351 "method": "bdev_nvme_attach_controller" 00:21:41.351 } 00:21:41.351 EOF 00:21:41.351 )") 00:21:41.351 06:16:47 -- nvmf/common.sh@542 -- # cat 00:21:41.351 06:16:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:41.351 06:16:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:41.351 { 00:21:41.351 "params": { 00:21:41.351 "name": "Nvme$subsystem", 00:21:41.351 "trtype": "$TEST_TRANSPORT", 00:21:41.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:41.351 "adrfam": "ipv4", 00:21:41.351 "trsvcid": "$NVMF_PORT", 00:21:41.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:41.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:41.351 "hdgst": ${hdgst:-false}, 00:21:41.351 "ddgst": ${ddgst:-false} 00:21:41.351 }, 00:21:41.351 "method": "bdev_nvme_attach_controller" 00:21:41.351 } 00:21:41.351 EOF 00:21:41.351 )") 00:21:41.351 06:16:47 -- nvmf/common.sh@542 -- # cat 00:21:41.351 06:16:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:41.351 06:16:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:41.351 { 00:21:41.351 "params": { 00:21:41.351 "name": "Nvme$subsystem", 00:21:41.351 "trtype": "$TEST_TRANSPORT", 00:21:41.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:41.351 "adrfam": "ipv4", 00:21:41.351 "trsvcid": "$NVMF_PORT", 00:21:41.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:41.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:41.351 "hdgst": ${hdgst:-false}, 00:21:41.351 "ddgst": ${ddgst:-false} 00:21:41.351 }, 00:21:41.351 "method": "bdev_nvme_attach_controller" 00:21:41.351 } 00:21:41.351 EOF 00:21:41.351 )") 00:21:41.351 06:16:47 -- nvmf/common.sh@542 -- # cat 00:21:41.351 06:16:47 -- nvmf/common.sh@522 -- # for 
subsystem in "${@:-1}" 00:21:41.351 06:16:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:41.351 { 00:21:41.351 "params": { 00:21:41.351 "name": "Nvme$subsystem", 00:21:41.351 "trtype": "$TEST_TRANSPORT", 00:21:41.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:41.351 "adrfam": "ipv4", 00:21:41.351 "trsvcid": "$NVMF_PORT", 00:21:41.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:41.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:41.351 "hdgst": ${hdgst:-false}, 00:21:41.351 "ddgst": ${ddgst:-false} 00:21:41.351 }, 00:21:41.351 "method": "bdev_nvme_attach_controller" 00:21:41.351 } 00:21:41.351 EOF 00:21:41.351 )") 00:21:41.351 06:16:47 -- nvmf/common.sh@542 -- # cat 00:21:41.351 06:16:47 -- nvmf/common.sh@544 -- # jq . 00:21:41.351 06:16:47 -- nvmf/common.sh@545 -- # IFS=, 00:21:41.351 06:16:47 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:41.351 "params": { 00:21:41.351 "name": "Nvme1", 00:21:41.351 "trtype": "tcp", 00:21:41.351 "traddr": "10.0.0.2", 00:21:41.351 "adrfam": "ipv4", 00:21:41.351 "trsvcid": "4420", 00:21:41.351 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.351 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:41.351 "hdgst": false, 00:21:41.351 "ddgst": false 00:21:41.351 }, 00:21:41.351 "method": "bdev_nvme_attach_controller" 00:21:41.351 },{ 00:21:41.351 "params": { 00:21:41.351 "name": "Nvme2", 00:21:41.351 "trtype": "tcp", 00:21:41.351 "traddr": "10.0.0.2", 00:21:41.351 "adrfam": "ipv4", 00:21:41.351 "trsvcid": "4420", 00:21:41.351 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:41.351 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:41.351 "hdgst": false, 00:21:41.351 "ddgst": false 00:21:41.351 }, 00:21:41.351 "method": "bdev_nvme_attach_controller" 00:21:41.351 },{ 00:21:41.351 "params": { 00:21:41.351 "name": "Nvme3", 00:21:41.351 "trtype": "tcp", 00:21:41.351 "traddr": "10.0.0.2", 00:21:41.351 "adrfam": "ipv4", 00:21:41.351 "trsvcid": "4420", 00:21:41.351 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:41.351 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:41.351 "hdgst": false, 00:21:41.351 "ddgst": false 00:21:41.351 }, 00:21:41.351 "method": "bdev_nvme_attach_controller" 00:21:41.351 },{ 00:21:41.351 "params": { 00:21:41.351 "name": "Nvme4", 00:21:41.351 "trtype": "tcp", 00:21:41.351 "traddr": "10.0.0.2", 00:21:41.351 "adrfam": "ipv4", 00:21:41.351 "trsvcid": "4420", 00:21:41.351 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:41.351 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:41.351 "hdgst": false, 00:21:41.351 "ddgst": false 00:21:41.351 }, 00:21:41.351 "method": "bdev_nvme_attach_controller" 00:21:41.351 },{ 00:21:41.351 "params": { 00:21:41.351 "name": "Nvme5", 00:21:41.351 "trtype": "tcp", 00:21:41.351 "traddr": "10.0.0.2", 00:21:41.351 "adrfam": "ipv4", 00:21:41.351 "trsvcid": "4420", 00:21:41.351 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:41.351 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:41.351 "hdgst": false, 00:21:41.351 "ddgst": false 00:21:41.351 }, 00:21:41.351 "method": "bdev_nvme_attach_controller" 00:21:41.351 },{ 00:21:41.351 "params": { 00:21:41.351 "name": "Nvme6", 00:21:41.351 "trtype": "tcp", 00:21:41.351 "traddr": "10.0.0.2", 00:21:41.351 "adrfam": "ipv4", 00:21:41.351 "trsvcid": "4420", 00:21:41.351 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:41.351 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:41.351 "hdgst": false, 00:21:41.351 "ddgst": false 00:21:41.351 }, 00:21:41.351 "method": "bdev_nvme_attach_controller" 00:21:41.351 },{ 00:21:41.351 "params": { 00:21:41.351 "name": "Nvme7", 00:21:41.351 "trtype": 
"tcp", 00:21:41.351 "traddr": "10.0.0.2", 00:21:41.351 "adrfam": "ipv4", 00:21:41.351 "trsvcid": "4420", 00:21:41.351 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:41.351 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:41.351 "hdgst": false, 00:21:41.351 "ddgst": false 00:21:41.351 }, 00:21:41.351 "method": "bdev_nvme_attach_controller" 00:21:41.351 },{ 00:21:41.352 "params": { 00:21:41.352 "name": "Nvme8", 00:21:41.352 "trtype": "tcp", 00:21:41.352 "traddr": "10.0.0.2", 00:21:41.352 "adrfam": "ipv4", 00:21:41.352 "trsvcid": "4420", 00:21:41.352 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:41.352 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:41.352 "hdgst": false, 00:21:41.352 "ddgst": false 00:21:41.352 }, 00:21:41.352 "method": "bdev_nvme_attach_controller" 00:21:41.352 },{ 00:21:41.352 "params": { 00:21:41.352 "name": "Nvme9", 00:21:41.352 "trtype": "tcp", 00:21:41.352 "traddr": "10.0.0.2", 00:21:41.352 "adrfam": "ipv4", 00:21:41.352 "trsvcid": "4420", 00:21:41.352 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:41.352 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:41.352 "hdgst": false, 00:21:41.352 "ddgst": false 00:21:41.352 }, 00:21:41.352 "method": "bdev_nvme_attach_controller" 00:21:41.352 },{ 00:21:41.352 "params": { 00:21:41.352 "name": "Nvme10", 00:21:41.352 "trtype": "tcp", 00:21:41.352 "traddr": "10.0.0.2", 00:21:41.352 "adrfam": "ipv4", 00:21:41.352 "trsvcid": "4420", 00:21:41.352 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:41.352 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:41.352 "hdgst": false, 00:21:41.352 "ddgst": false 00:21:41.352 }, 00:21:41.352 "method": "bdev_nvme_attach_controller" 00:21:41.352 }' 00:21:41.352 [2024-07-13 06:16:47.672475] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:41.352 [2024-07-13 06:16:47.672563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1179555 ] 00:21:41.352 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.352 [2024-07-13 06:16:47.736661] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.352 [2024-07-13 06:16:47.846274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.251 Running I/O for 10 seconds... 
00:21:43.817 06:16:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:43.817 06:16:50 -- common/autotest_common.sh@852 -- # return 0 00:21:43.817 06:16:50 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:43.817 06:16:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:43.817 06:16:50 -- common/autotest_common.sh@10 -- # set +x 00:21:43.817 06:16:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:43.817 06:16:50 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:43.817 06:16:50 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:43.817 06:16:50 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:43.817 06:16:50 -- target/shutdown.sh@57 -- # local ret=1 00:21:43.817 06:16:50 -- target/shutdown.sh@58 -- # local i 00:21:43.817 06:16:50 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:43.817 06:16:50 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:43.817 06:16:50 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:43.817 06:16:50 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:43.817 06:16:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:43.817 06:16:50 -- common/autotest_common.sh@10 -- # set +x 00:21:43.817 06:16:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:43.817 06:16:50 -- target/shutdown.sh@60 -- # read_io_count=254 00:21:43.817 06:16:50 -- target/shutdown.sh@63 -- # '[' 254 -ge 100 ']' 00:21:43.817 06:16:50 -- target/shutdown.sh@64 -- # ret=0 00:21:43.817 06:16:50 -- target/shutdown.sh@65 -- # break 00:21:43.817 06:16:50 -- target/shutdown.sh@69 -- # return 0 00:21:43.817 06:16:50 -- target/shutdown.sh@109 -- # killprocess 1179555 00:21:43.817 06:16:50 -- common/autotest_common.sh@926 -- # '[' -z 1179555 ']' 00:21:43.817 06:16:50 -- common/autotest_common.sh@930 -- # kill -0 1179555 00:21:43.817 06:16:50 -- common/autotest_common.sh@931 -- # uname 00:21:43.817 06:16:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:43.817 06:16:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1179555 00:21:43.817 06:16:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:43.817 06:16:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:43.817 06:16:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1179555' 00:21:43.817 killing process with pid 1179555 00:21:43.817 06:16:50 -- common/autotest_common.sh@945 -- # kill 1179555 00:21:43.817 06:16:50 -- common/autotest_common.sh@950 -- # wait 1179555 00:21:44.078 Received shutdown signal, test time was about 0.900005 seconds 00:21:44.078 00:21:44.078 Latency(us) 00:21:44.078 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.078 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:44.078 Verification LBA range: start 0x0 length 0x400 00:21:44.078 Nvme1n1 : 0.85 415.94 26.00 0.00 0.00 149820.83 26020.22 117285.17 00:21:44.078 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:44.078 Verification LBA range: start 0x0 length 0x400 00:21:44.078 Nvme2n1 : 0.90 404.83 25.30 0.00 0.00 146224.63 14078.10 145247.19 00:21:44.078 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:44.078 Verification LBA range: start 0x0 length 0x400 00:21:44.078 Nvme3n1 : 0.86 414.84 25.93 0.00 0.00 147514.19 25049.32 117285.17 00:21:44.078 Job: Nvme4n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:21:44.078 Verification LBA range: start 0x0 length 0x400 00:21:44.078 Nvme4n1 : 0.86 413.72 25.86 0.00 0.00 146812.26 23884.23 115731.72 00:21:44.078 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:44.078 Verification LBA range: start 0x0 length 0x400 00:21:44.078 Nvme5n1 : 0.86 412.91 25.81 0.00 0.00 146106.38 21554.06 118061.89 00:21:44.078 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:44.078 Verification LBA range: start 0x0 length 0x400 00:21:44.078 Nvme6n1 : 0.84 375.31 23.46 0.00 0.00 158125.85 23884.23 126605.84 00:21:44.078 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:44.078 Verification LBA range: start 0x0 length 0x400 00:21:44.078 Nvme7n1 : 0.86 411.52 25.72 0.00 0.00 144043.96 21165.70 121168.78 00:21:44.078 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:44.078 Verification LBA range: start 0x0 length 0x400 00:21:44.078 Nvme8n1 : 0.84 376.90 23.56 0.00 0.00 154892.74 26602.76 121168.78 00:21:44.078 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:44.078 Verification LBA range: start 0x0 length 0x400 00:21:44.078 Nvme9n1 : 0.84 372.89 23.31 0.00 0.00 154744.79 30292.20 123498.95 00:21:44.078 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:44.078 Verification LBA range: start 0x0 length 0x400 00:21:44.078 Nvme10n1 : 0.85 384.77 24.05 0.00 0.00 148319.54 10194.49 129712.73 00:21:44.078 =================================================================================================================== 00:21:44.078 Total : 3983.65 248.98 0.00 0.00 149442.54 10194.49 145247.19 00:21:44.336 06:16:50 -- target/shutdown.sh@112 -- # sleep 1 00:21:45.268 06:16:51 -- target/shutdown.sh@113 -- # kill -0 1179361 00:21:45.268 06:16:51 -- target/shutdown.sh@115 -- # stoptarget 00:21:45.268 06:16:51 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:45.268 06:16:51 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:45.268 06:16:51 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:45.268 06:16:51 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:45.268 06:16:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:45.268 06:16:51 -- nvmf/common.sh@116 -- # sync 00:21:45.268 06:16:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:45.268 06:16:51 -- nvmf/common.sh@119 -- # set +e 00:21:45.268 06:16:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:45.268 06:16:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:45.268 rmmod nvme_tcp 00:21:45.268 rmmod nvme_fabrics 00:21:45.268 rmmod nvme_keyring 00:21:45.268 06:16:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:45.268 06:16:51 -- nvmf/common.sh@123 -- # set -e 00:21:45.268 06:16:51 -- nvmf/common.sh@124 -- # return 0 00:21:45.268 06:16:51 -- nvmf/common.sh@477 -- # '[' -n 1179361 ']' 00:21:45.268 06:16:51 -- nvmf/common.sh@478 -- # killprocess 1179361 00:21:45.268 06:16:51 -- common/autotest_common.sh@926 -- # '[' -z 1179361 ']' 00:21:45.268 06:16:51 -- common/autotest_common.sh@930 -- # kill -0 1179361 00:21:45.268 06:16:51 -- common/autotest_common.sh@931 -- # uname 00:21:45.268 06:16:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:45.268 06:16:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o 
comm= 1179361 00:21:45.268 06:16:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:45.268 06:16:51 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:45.268 06:16:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1179361' 00:21:45.268 killing process with pid 1179361 00:21:45.268 06:16:51 -- common/autotest_common.sh@945 -- # kill 1179361 00:21:45.268 06:16:51 -- common/autotest_common.sh@950 -- # wait 1179361 00:21:45.833 06:16:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:45.833 06:16:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:45.833 06:16:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:45.833 06:16:52 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:45.833 06:16:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:45.833 06:16:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.833 06:16:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:45.833 06:16:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.361 06:16:54 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:48.361 00:21:48.361 real 0m8.428s 00:21:48.361 user 0m26.611s 00:21:48.361 sys 0m1.604s 00:21:48.361 06:16:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:48.361 06:16:54 -- common/autotest_common.sh@10 -- # set +x 00:21:48.361 ************************************ 00:21:48.361 END TEST nvmf_shutdown_tc2 00:21:48.361 ************************************ 00:21:48.361 06:16:54 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:48.361 06:16:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:48.361 06:16:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:48.361 06:16:54 -- common/autotest_common.sh@10 -- # set +x 00:21:48.361 ************************************ 00:21:48.361 START TEST nvmf_shutdown_tc3 00:21:48.361 ************************************ 00:21:48.361 06:16:54 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:21:48.361 06:16:54 -- target/shutdown.sh@120 -- # starttarget 00:21:48.361 06:16:54 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:48.361 06:16:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:48.361 06:16:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:48.361 06:16:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:48.361 06:16:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:48.361 06:16:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:48.361 06:16:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.361 06:16:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:48.361 06:16:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.361 06:16:54 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:48.361 06:16:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:48.361 06:16:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:48.361 06:16:54 -- common/autotest_common.sh@10 -- # set +x 00:21:48.361 06:16:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:48.362 06:16:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:48.362 06:16:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:48.362 06:16:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:48.362 06:16:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:48.362 06:16:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:48.362 06:16:54 -- nvmf/common.sh@292 
-- # local -A pci_drivers 00:21:48.362 06:16:54 -- nvmf/common.sh@294 -- # net_devs=() 00:21:48.362 06:16:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:48.362 06:16:54 -- nvmf/common.sh@295 -- # e810=() 00:21:48.362 06:16:54 -- nvmf/common.sh@295 -- # local -ga e810 00:21:48.362 06:16:54 -- nvmf/common.sh@296 -- # x722=() 00:21:48.362 06:16:54 -- nvmf/common.sh@296 -- # local -ga x722 00:21:48.362 06:16:54 -- nvmf/common.sh@297 -- # mlx=() 00:21:48.362 06:16:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:48.362 06:16:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:48.362 06:16:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:48.362 06:16:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:48.362 06:16:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:48.362 06:16:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:48.362 06:16:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:48.362 06:16:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:48.362 06:16:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:48.362 06:16:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:48.362 06:16:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:48.362 06:16:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:48.362 06:16:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:48.362 06:16:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:48.362 06:16:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:48.362 06:16:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:48.362 06:16:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:48.362 06:16:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:48.362 06:16:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:48.362 06:16:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:48.362 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:48.362 06:16:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:48.362 06:16:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:48.362 06:16:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.362 06:16:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.362 06:16:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:48.362 06:16:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:48.362 06:16:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:48.362 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:48.362 06:16:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:48.362 06:16:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:48.362 06:16:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.362 06:16:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.362 06:16:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:48.362 06:16:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:48.362 06:16:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:48.362 06:16:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:48.362 06:16:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:48.362 06:16:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.362 06:16:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
00:21:48.362 06:16:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.362 06:16:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:48.362 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:48.362 06:16:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.362 06:16:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:48.362 06:16:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.362 06:16:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:48.362 06:16:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.362 06:16:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:48.362 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:48.362 06:16:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.362 06:16:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:48.362 06:16:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:48.362 06:16:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:48.362 06:16:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:48.362 06:16:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:48.362 06:16:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:48.362 06:16:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:48.362 06:16:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:48.362 06:16:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:48.362 06:16:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:48.362 06:16:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:48.362 06:16:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:48.362 06:16:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:48.362 06:16:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:48.362 06:16:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:48.362 06:16:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:48.362 06:16:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:48.362 06:16:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:48.362 06:16:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:48.362 06:16:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:48.362 06:16:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:48.362 06:16:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:48.362 06:16:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:48.362 06:16:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:48.362 06:16:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:48.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:48.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:21:48.362 00:21:48.362 --- 10.0.0.2 ping statistics --- 00:21:48.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.362 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:21:48.362 06:16:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:48.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:48.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:21:48.362 00:21:48.362 --- 10.0.0.1 ping statistics --- 00:21:48.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.362 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:21:48.362 06:16:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:48.362 06:16:54 -- nvmf/common.sh@410 -- # return 0 00:21:48.362 06:16:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:48.362 06:16:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:48.362 06:16:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:48.362 06:16:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:48.362 06:16:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:48.362 06:16:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:48.362 06:16:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:48.362 06:16:54 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:48.362 06:16:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:48.362 06:16:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:48.362 06:16:54 -- common/autotest_common.sh@10 -- # set +x 00:21:48.362 06:16:54 -- nvmf/common.sh@469 -- # nvmfpid=1180511 00:21:48.362 06:16:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:48.362 06:16:54 -- nvmf/common.sh@470 -- # waitforlisten 1180511 00:21:48.362 06:16:54 -- common/autotest_common.sh@819 -- # '[' -z 1180511 ']' 00:21:48.362 06:16:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.362 06:16:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:48.362 06:16:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.362 06:16:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:48.362 06:16:54 -- common/autotest_common.sh@10 -- # set +x 00:21:48.362 [2024-07-13 06:16:54.588723] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:48.362 [2024-07-13 06:16:54.588798] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.362 EAL: No free 2048 kB hugepages reported on node 1 00:21:48.362 [2024-07-13 06:16:54.654028] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:48.362 [2024-07-13 06:16:54.763393] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:48.362 [2024-07-13 06:16:54.763551] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:48.362 [2024-07-13 06:16:54.763568] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:48.362 [2024-07-13 06:16:54.763580] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
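The nvmf_tcp_init sequence traced above splits the two E810 ports between a private network namespace and the host before the target application is started: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (the target side), cvl_0_1 stays on the host as 10.0.0.1 (the initiator side), TCP port 4420 is opened in iptables, and reachability is ping-checked in both directions. A condensed replay of those steps, collected from the trace (the cvl_0_0/cvl_0_1 interface names are specific to this rig, and the real helper also flushes any pre-existing addresses first):

TARGET_NS=cvl_0_0_ns_spdk
ip netns add "$TARGET_NS"
ip link set cvl_0_0 netns "$TARGET_NS"                            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator (host) address
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
ip netns exec "$TARGET_NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT      # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                # host -> namespace
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1                     # namespace -> host

Running nvmf_tgt under ip netns exec cvl_0_0_ns_spdk, as the nvmfappstart trace above does, then confines the target's listener to the namespaced port while bdevperf connects from the host side.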
00:21:48.362 [2024-07-13 06:16:54.763666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:48.362 [2024-07-13 06:16:54.763709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:48.362 [2024-07-13 06:16:54.763765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:48.362 [2024-07-13 06:16:54.763768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.295 06:16:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:49.295 06:16:55 -- common/autotest_common.sh@852 -- # return 0 00:21:49.295 06:16:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:49.295 06:16:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:49.295 06:16:55 -- common/autotest_common.sh@10 -- # set +x 00:21:49.295 06:16:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.295 06:16:55 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:49.295 06:16:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.295 06:16:55 -- common/autotest_common.sh@10 -- # set +x 00:21:49.295 [2024-07-13 06:16:55.564331] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.295 06:16:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.295 06:16:55 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:49.295 06:16:55 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:49.295 06:16:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:49.295 06:16:55 -- common/autotest_common.sh@10 -- # set +x 00:21:49.295 06:16:55 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:49.295 06:16:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:49.295 06:16:55 -- target/shutdown.sh@28 -- # cat 00:21:49.295 06:16:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:49.295 06:16:55 -- target/shutdown.sh@28 -- # cat 00:21:49.295 06:16:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:49.295 06:16:55 -- target/shutdown.sh@28 -- # cat 00:21:49.295 06:16:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:49.295 06:16:55 -- target/shutdown.sh@28 -- # cat 00:21:49.295 06:16:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:49.295 06:16:55 -- target/shutdown.sh@28 -- # cat 00:21:49.295 06:16:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:49.295 06:16:55 -- target/shutdown.sh@28 -- # cat 00:21:49.295 06:16:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:49.295 06:16:55 -- target/shutdown.sh@28 -- # cat 00:21:49.295 06:16:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:49.295 06:16:55 -- target/shutdown.sh@28 -- # cat 00:21:49.295 06:16:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:49.295 06:16:55 -- target/shutdown.sh@28 -- # cat 00:21:49.295 06:16:55 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:49.295 06:16:55 -- target/shutdown.sh@28 -- # cat 00:21:49.295 06:16:55 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:49.295 06:16:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.295 06:16:55 -- common/autotest_common.sh@10 -- # set +x 00:21:49.295 Malloc1 00:21:49.295 [2024-07-13 06:16:55.639197] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.295 Malloc2 
00:21:49.295 Malloc3 00:21:49.295 Malloc4 00:21:49.553 Malloc5 00:21:49.553 Malloc6 00:21:49.553 Malloc7 00:21:49.553 Malloc8 00:21:49.553 Malloc9 00:21:49.553 Malloc10 00:21:49.813 06:16:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.813 06:16:56 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:49.813 06:16:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:49.813 06:16:56 -- common/autotest_common.sh@10 -- # set +x 00:21:49.813 06:16:56 -- target/shutdown.sh@124 -- # perfpid=1180803 00:21:49.813 06:16:56 -- target/shutdown.sh@125 -- # waitforlisten 1180803 /var/tmp/bdevperf.sock 00:21:49.813 06:16:56 -- common/autotest_common.sh@819 -- # '[' -z 1180803 ']' 00:21:49.813 06:16:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:49.813 06:16:56 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:49.813 06:16:56 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:49.813 06:16:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:49.813 06:16:56 -- nvmf/common.sh@520 -- # config=() 00:21:49.813 06:16:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:49.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:49.813 06:16:56 -- nvmf/common.sh@520 -- # local subsystem config 00:21:49.813 06:16:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:49.813 06:16:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:49.813 06:16:56 -- common/autotest_common.sh@10 -- # set +x 00:21:49.813 06:16:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:49.813 { 00:21:49.813 "params": { 00:21:49.813 "name": "Nvme$subsystem", 00:21:49.813 "trtype": "$TEST_TRANSPORT", 00:21:49.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.813 "adrfam": "ipv4", 00:21:49.813 "trsvcid": "$NVMF_PORT", 00:21:49.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.813 "hdgst": ${hdgst:-false}, 00:21:49.813 "ddgst": ${ddgst:-false} 00:21:49.813 }, 00:21:49.813 "method": "bdev_nvme_attach_controller" 00:21:49.813 } 00:21:49.813 EOF 00:21:49.813 )") 00:21:49.813 06:16:56 -- nvmf/common.sh@542 -- # cat 00:21:49.813 06:16:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:49.813 06:16:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:49.813 { 00:21:49.813 "params": { 00:21:49.813 "name": "Nvme$subsystem", 00:21:49.813 "trtype": "$TEST_TRANSPORT", 00:21:49.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.813 "adrfam": "ipv4", 00:21:49.813 "trsvcid": "$NVMF_PORT", 00:21:49.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.813 "hdgst": ${hdgst:-false}, 00:21:49.813 "ddgst": ${ddgst:-false} 00:21:49.813 }, 00:21:49.813 "method": "bdev_nvme_attach_controller" 00:21:49.813 } 00:21:49.813 EOF 00:21:49.813 )") 00:21:49.813 06:16:56 -- nvmf/common.sh@542 -- # cat 00:21:49.813 06:16:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:49.813 06:16:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:49.813 { 00:21:49.813 "params": { 00:21:49.813 "name": "Nvme$subsystem", 00:21:49.813 "trtype": "$TEST_TRANSPORT", 00:21:49.813 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:21:49.813 "adrfam": "ipv4", 00:21:49.813 "trsvcid": "$NVMF_PORT", 00:21:49.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.813 "hdgst": ${hdgst:-false}, 00:21:49.813 "ddgst": ${ddgst:-false} 00:21:49.813 }, 00:21:49.813 "method": "bdev_nvme_attach_controller" 00:21:49.813 } 00:21:49.813 EOF 00:21:49.813 )") 00:21:49.813 06:16:56 -- nvmf/common.sh@542 -- # cat 00:21:49.813 06:16:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:49.813 06:16:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:49.813 { 00:21:49.813 "params": { 00:21:49.813 "name": "Nvme$subsystem", 00:21:49.813 "trtype": "$TEST_TRANSPORT", 00:21:49.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.813 "adrfam": "ipv4", 00:21:49.813 "trsvcid": "$NVMF_PORT", 00:21:49.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.813 "hdgst": ${hdgst:-false}, 00:21:49.813 "ddgst": ${ddgst:-false} 00:21:49.813 }, 00:21:49.813 "method": "bdev_nvme_attach_controller" 00:21:49.813 } 00:21:49.813 EOF 00:21:49.813 )") 00:21:49.813 06:16:56 -- nvmf/common.sh@542 -- # cat 00:21:49.813 06:16:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:49.813 06:16:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:49.813 { 00:21:49.813 "params": { 00:21:49.813 "name": "Nvme$subsystem", 00:21:49.813 "trtype": "$TEST_TRANSPORT", 00:21:49.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.813 "adrfam": "ipv4", 00:21:49.813 "trsvcid": "$NVMF_PORT", 00:21:49.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.813 "hdgst": ${hdgst:-false}, 00:21:49.813 "ddgst": ${ddgst:-false} 00:21:49.813 }, 00:21:49.813 "method": "bdev_nvme_attach_controller" 00:21:49.813 } 00:21:49.813 EOF 00:21:49.813 )") 00:21:49.813 06:16:56 -- nvmf/common.sh@542 -- # cat 00:21:49.813 06:16:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:49.813 06:16:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:49.813 { 00:21:49.813 "params": { 00:21:49.813 "name": "Nvme$subsystem", 00:21:49.813 "trtype": "$TEST_TRANSPORT", 00:21:49.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.813 "adrfam": "ipv4", 00:21:49.813 "trsvcid": "$NVMF_PORT", 00:21:49.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.813 "hdgst": ${hdgst:-false}, 00:21:49.813 "ddgst": ${ddgst:-false} 00:21:49.813 }, 00:21:49.813 "method": "bdev_nvme_attach_controller" 00:21:49.813 } 00:21:49.813 EOF 00:21:49.813 )") 00:21:49.813 06:16:56 -- nvmf/common.sh@542 -- # cat 00:21:49.813 06:16:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:49.813 06:16:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:49.813 { 00:21:49.813 "params": { 00:21:49.813 "name": "Nvme$subsystem", 00:21:49.813 "trtype": "$TEST_TRANSPORT", 00:21:49.813 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.813 "adrfam": "ipv4", 00:21:49.813 "trsvcid": "$NVMF_PORT", 00:21:49.813 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.813 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.813 "hdgst": ${hdgst:-false}, 00:21:49.813 "ddgst": ${ddgst:-false} 00:21:49.813 }, 00:21:49.813 "method": "bdev_nvme_attach_controller" 00:21:49.813 } 00:21:49.813 EOF 00:21:49.813 )") 00:21:49.813 06:16:56 -- nvmf/common.sh@542 -- # cat 00:21:49.814 06:16:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 
00:21:49.814 06:16:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:49.814 { 00:21:49.814 "params": { 00:21:49.814 "name": "Nvme$subsystem", 00:21:49.814 "trtype": "$TEST_TRANSPORT", 00:21:49.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.814 "adrfam": "ipv4", 00:21:49.814 "trsvcid": "$NVMF_PORT", 00:21:49.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.814 "hdgst": ${hdgst:-false}, 00:21:49.814 "ddgst": ${ddgst:-false} 00:21:49.814 }, 00:21:49.814 "method": "bdev_nvme_attach_controller" 00:21:49.814 } 00:21:49.814 EOF 00:21:49.814 )") 00:21:49.814 06:16:56 -- nvmf/common.sh@542 -- # cat 00:21:49.814 06:16:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:49.814 06:16:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:49.814 { 00:21:49.814 "params": { 00:21:49.814 "name": "Nvme$subsystem", 00:21:49.814 "trtype": "$TEST_TRANSPORT", 00:21:49.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.814 "adrfam": "ipv4", 00:21:49.814 "trsvcid": "$NVMF_PORT", 00:21:49.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.814 "hdgst": ${hdgst:-false}, 00:21:49.814 "ddgst": ${ddgst:-false} 00:21:49.814 }, 00:21:49.814 "method": "bdev_nvme_attach_controller" 00:21:49.814 } 00:21:49.814 EOF 00:21:49.814 )") 00:21:49.814 06:16:56 -- nvmf/common.sh@542 -- # cat 00:21:49.814 06:16:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:49.814 06:16:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:49.814 { 00:21:49.814 "params": { 00:21:49.814 "name": "Nvme$subsystem", 00:21:49.814 "trtype": "$TEST_TRANSPORT", 00:21:49.814 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.814 "adrfam": "ipv4", 00:21:49.814 "trsvcid": "$NVMF_PORT", 00:21:49.814 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.814 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.814 "hdgst": ${hdgst:-false}, 00:21:49.814 "ddgst": ${ddgst:-false} 00:21:49.814 }, 00:21:49.814 "method": "bdev_nvme_attach_controller" 00:21:49.814 } 00:21:49.814 EOF 00:21:49.814 )") 00:21:49.814 06:16:56 -- nvmf/common.sh@542 -- # cat 00:21:49.814 06:16:56 -- nvmf/common.sh@544 -- # jq . 
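The ten config+= passes just traced (and the identical sequence before the earlier tc2 bdevperf run) are gen_nvmf_target_json building the bdevperf configuration: each pass appends one heredoc fragment per subsystem to a config array, and the jq / IFS=, / printf steps around this point join those fragments into the resolved attach-controller list printed next, which bdevperf reads over /dev/fd/63 (the process substitution visible in the bdevperf command line above). A minimal stand-alone sketch of that pattern follows; it is not SPDK's verbatim helper, and the ${VAR:-...} defaults are illustrative assumptions:

gen_controllers_json() {
    # One bdev_nvme_attach_controller entry per requested subsystem id,
    # in the same shape as the resolved config printed by the trace.
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    printf '[%s]\n' "${config[*]}" | jq .   # comma-join the fragments; the brackets make one JSON array for jq
}

Used the same way the test drives it, e.g. bdevperf --json <(gen_controllers_json 1 2 3) -q 64 -o 65536 -w verify -t 10, with one NVMe-oF controller attached per generated entry before the verify workload starts.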
00:21:49.814 06:16:56 -- nvmf/common.sh@545 -- # IFS=, 00:21:49.814 06:16:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:49.814 "params": { 00:21:49.814 "name": "Nvme1", 00:21:49.814 "trtype": "tcp", 00:21:49.814 "traddr": "10.0.0.2", 00:21:49.814 "adrfam": "ipv4", 00:21:49.814 "trsvcid": "4420", 00:21:49.814 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:49.814 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:49.814 "hdgst": false, 00:21:49.814 "ddgst": false 00:21:49.814 }, 00:21:49.814 "method": "bdev_nvme_attach_controller" 00:21:49.814 },{ 00:21:49.814 "params": { 00:21:49.814 "name": "Nvme2", 00:21:49.814 "trtype": "tcp", 00:21:49.814 "traddr": "10.0.0.2", 00:21:49.814 "adrfam": "ipv4", 00:21:49.814 "trsvcid": "4420", 00:21:49.814 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:49.814 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:49.814 "hdgst": false, 00:21:49.814 "ddgst": false 00:21:49.814 }, 00:21:49.814 "method": "bdev_nvme_attach_controller" 00:21:49.814 },{ 00:21:49.814 "params": { 00:21:49.814 "name": "Nvme3", 00:21:49.814 "trtype": "tcp", 00:21:49.814 "traddr": "10.0.0.2", 00:21:49.814 "adrfam": "ipv4", 00:21:49.814 "trsvcid": "4420", 00:21:49.814 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:49.814 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:49.814 "hdgst": false, 00:21:49.814 "ddgst": false 00:21:49.814 }, 00:21:49.814 "method": "bdev_nvme_attach_controller" 00:21:49.814 },{ 00:21:49.814 "params": { 00:21:49.814 "name": "Nvme4", 00:21:49.814 "trtype": "tcp", 00:21:49.814 "traddr": "10.0.0.2", 00:21:49.814 "adrfam": "ipv4", 00:21:49.814 "trsvcid": "4420", 00:21:49.814 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:49.814 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:49.814 "hdgst": false, 00:21:49.814 "ddgst": false 00:21:49.814 }, 00:21:49.814 "method": "bdev_nvme_attach_controller" 00:21:49.814 },{ 00:21:49.814 "params": { 00:21:49.814 "name": "Nvme5", 00:21:49.814 "trtype": "tcp", 00:21:49.814 "traddr": "10.0.0.2", 00:21:49.814 "adrfam": "ipv4", 00:21:49.814 "trsvcid": "4420", 00:21:49.814 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:49.814 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:49.814 "hdgst": false, 00:21:49.814 "ddgst": false 00:21:49.814 }, 00:21:49.814 "method": "bdev_nvme_attach_controller" 00:21:49.814 },{ 00:21:49.814 "params": { 00:21:49.814 "name": "Nvme6", 00:21:49.814 "trtype": "tcp", 00:21:49.814 "traddr": "10.0.0.2", 00:21:49.814 "adrfam": "ipv4", 00:21:49.814 "trsvcid": "4420", 00:21:49.814 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:49.814 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:49.814 "hdgst": false, 00:21:49.814 "ddgst": false 00:21:49.814 }, 00:21:49.814 "method": "bdev_nvme_attach_controller" 00:21:49.814 },{ 00:21:49.814 "params": { 00:21:49.814 "name": "Nvme7", 00:21:49.814 "trtype": "tcp", 00:21:49.814 "traddr": "10.0.0.2", 00:21:49.814 "adrfam": "ipv4", 00:21:49.814 "trsvcid": "4420", 00:21:49.814 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:49.814 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:49.814 "hdgst": false, 00:21:49.814 "ddgst": false 00:21:49.814 }, 00:21:49.814 "method": "bdev_nvme_attach_controller" 00:21:49.814 },{ 00:21:49.814 "params": { 00:21:49.814 "name": "Nvme8", 00:21:49.814 "trtype": "tcp", 00:21:49.814 "traddr": "10.0.0.2", 00:21:49.814 "adrfam": "ipv4", 00:21:49.814 "trsvcid": "4420", 00:21:49.814 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:49.814 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:49.814 "hdgst": false, 00:21:49.814 "ddgst": false 00:21:49.814 }, 00:21:49.814 "method": 
"bdev_nvme_attach_controller" 00:21:49.814 },{ 00:21:49.814 "params": { 00:21:49.814 "name": "Nvme9", 00:21:49.814 "trtype": "tcp", 00:21:49.814 "traddr": "10.0.0.2", 00:21:49.814 "adrfam": "ipv4", 00:21:49.814 "trsvcid": "4420", 00:21:49.814 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:49.814 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:49.814 "hdgst": false, 00:21:49.814 "ddgst": false 00:21:49.814 }, 00:21:49.814 "method": "bdev_nvme_attach_controller" 00:21:49.814 },{ 00:21:49.814 "params": { 00:21:49.814 "name": "Nvme10", 00:21:49.814 "trtype": "tcp", 00:21:49.814 "traddr": "10.0.0.2", 00:21:49.814 "adrfam": "ipv4", 00:21:49.814 "trsvcid": "4420", 00:21:49.814 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:49.814 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:49.814 "hdgst": false, 00:21:49.814 "ddgst": false 00:21:49.814 }, 00:21:49.814 "method": "bdev_nvme_attach_controller" 00:21:49.814 }' 00:21:49.814 [2024-07-13 06:16:56.128389] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:49.814 [2024-07-13 06:16:56.128478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1180803 ] 00:21:49.814 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.814 [2024-07-13 06:16:56.191447] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.814 [2024-07-13 06:16:56.299234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.715 Running I/O for 10 seconds... 00:21:52.293 06:16:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:52.293 06:16:58 -- common/autotest_common.sh@852 -- # return 0 00:21:52.293 06:16:58 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:52.293 06:16:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:52.293 06:16:58 -- common/autotest_common.sh@10 -- # set +x 00:21:52.293 06:16:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:52.293 06:16:58 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:52.293 06:16:58 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:52.293 06:16:58 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:52.293 06:16:58 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:52.293 06:16:58 -- target/shutdown.sh@57 -- # local ret=1 00:21:52.293 06:16:58 -- target/shutdown.sh@58 -- # local i 00:21:52.293 06:16:58 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:52.293 06:16:58 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:52.293 06:16:58 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:52.293 06:16:58 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:52.293 06:16:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:52.293 06:16:58 -- common/autotest_common.sh@10 -- # set +x 00:21:52.293 06:16:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:52.293 06:16:58 -- target/shutdown.sh@60 -- # read_io_count=167 00:21:52.293 06:16:58 -- target/shutdown.sh@63 -- # '[' 167 -ge 100 ']' 00:21:52.293 06:16:58 -- target/shutdown.sh@64 -- # ret=0 00:21:52.293 06:16:58 -- target/shutdown.sh@65 -- # break 00:21:52.293 06:16:58 -- target/shutdown.sh@69 -- # return 0 00:21:52.293 06:16:58 -- target/shutdown.sh@134 -- # killprocess 
1180511 00:21:52.293 06:16:58 -- common/autotest_common.sh@926 -- # '[' -z 1180511 ']' 00:21:52.293 06:16:58 -- common/autotest_common.sh@930 -- # kill -0 1180511 00:21:52.293 06:16:58 -- common/autotest_common.sh@931 -- # uname 00:21:52.293 06:16:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:52.293 06:16:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1180511 00:21:52.293 06:16:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:52.293 06:16:58 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:52.293 06:16:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1180511' 00:21:52.293 killing process with pid 1180511 00:21:52.293 06:16:58 -- common/autotest_common.sh@945 -- # kill 1180511 00:21:52.293 06:16:58 -- common/autotest_common.sh@950 -- # wait 1180511 00:21:52.293 [2024-07-13 06:16:58.715874] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87100 is same with the state(5) to be set 00:21:52.293 [2024-07-13 06:16:58.715972] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87100 is same with the state(5) to be set 00:21:52.293 [2024-07-13 06:16:58.715988] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87100 is same with the state(5) to be set 00:21:52.293 [2024-07-13 06:16:58.716001] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87100 is same with the state(5) to be set 00:21:52.293 [2024-07-13 06:16:58.716013] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87100 is same with the state(5) to be set 00:21:52.293 [2024-07-13 06:16:58.716026] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87100 is same with the state(5) to be set 00:21:52.293 [2024-07-13 06:16:58.716039] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87100 is same with the state(5) to be set 00:21:52.293 [2024-07-13 06:16:58.716052] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87100 is same with the state(5) to be set 00:21:52.293 [2024-07-13 06:16:58.716064] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87100 is same with the state(5) to be set 00:21:52.293 [2024-07-13 06:16:58.716077] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87100 is same with the state(5) to be set 00:21:52.293 [2024-07-13 06:16:58.716090] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87100 is same with the state(5) to be set 00:21:52.293 [2024-07-13 06:16:58.716102] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87100 is same with the state(5) to be set 00:21:52.293 [2024-07-13 06:16:58.716115] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87100 is same with the state(5) to be set 00:21:52.293 [2024-07-13 06:16:58.716127] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87100 is same with the state(5) to be set 00:21:52.293 [2024-07-13 06:16:58.716150] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87100 is same with the state(5) to be set 00:21:52.293 [2024-07-13 06:16:58.716163] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87100 is same with the state(5) to be set 00:21:52.293 [2024-07-13 06:16:58.716176] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87100 is same with the state(5) to be set 00:21:52.294 [2024-07-13 06:16:58.718362] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89a70 is same with the state(5) to be set
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89a70 is same with the state(5) to be set 00:21:52.295 [2024-07-13 06:16:58.719014] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89a70 is same with the state(5) to be set 00:21:52.295 [2024-07-13 06:16:58.719027] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89a70 is same with the state(5) to be set 00:21:52.295 [2024-07-13 06:16:58.719040] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89a70 is same with the state(5) to be set 00:21:52.295 [2024-07-13 06:16:58.719053] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89a70 is same with the state(5) to be set 00:21:52.295 [2024-07-13 06:16:58.719071] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89a70 is same with the state(5) to be set 00:21:52.295 [2024-07-13 06:16:58.719083] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89a70 is same with the state(5) to be set 00:21:52.295 [2024-07-13 06:16:58.719096] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89a70 is same with the state(5) to be set 00:21:52.295 [2024-07-13 06:16:58.719109] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89a70 is same with the state(5) to be set 00:21:52.295 [2024-07-13 06:16:58.719122] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89a70 is same with the state(5) to be set 00:21:52.295 [2024-07-13 06:16:58.719141] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89a70 is same with the state(5) to be set 00:21:52.295 [2024-07-13 06:16:58.719153] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89a70 is same with the state(5) to be set 00:21:52.295 [2024-07-13 06:16:58.719174] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89a70 is same with the state(5) to be set 00:21:52.295 [2024-07-13 06:16:58.719196] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89a70 is same with the state(5) to be set 00:21:52.295 [2024-07-13 06:16:58.719217] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89a70 is same with the state(5) to be set 00:21:52.295 [2024-07-13 06:16:58.719215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.295 [2024-07-13 06:16:58.719241] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89a70 is same with the state(5) to be set 00:21:52.295 [2024-07-13 06:16:58.719260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.295 [2024-07-13 06:16:58.719263] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd89a70 is same with the state(5) to be set 00:21:52.295 [2024-07-13 06:16:58.719294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.295 [2024-07-13 06:16:58.719313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.295 [2024-07-13 06:16:58.719332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.295 [2024-07-13 06:16:58.719348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.295 [2024-07-13 06:16:58.719364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.295 [2024-07-13 06:16:58.719380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.295 [2024-07-13 06:16:58.719396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.295 [2024-07-13 06:16:58.719411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.295 [2024-07-13 06:16:58.719427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.295 [2024-07-13 06:16:58.719442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.295 [2024-07-13 06:16:58.719458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.295 [2024-07-13 06:16:58.719472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.295 [2024-07-13 06:16:58.719500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.295 [2024-07-13 06:16:58.719515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.295 [2024-07-13 06:16:58.719532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.295 [2024-07-13 06:16:58.719546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.295 [2024-07-13 06:16:58.719562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.295 [2024-07-13 06:16:58.719576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.295 [2024-07-13 06:16:58.719592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.295 [2024-07-13 06:16:58.719608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.295 [2024-07-13 06:16:58.719624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.295 [2024-07-13 06:16:58.719638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.295 [2024-07-13 06:16:58.719655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26624 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.295 [2024-07-13 06:16:58.719669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.295 [2024-07-13 06:16:58.719685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.295 [2024-07-13 06:16:58.719700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.295 [2024-07-13 06:16:58.719716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.295 [2024-07-13 06:16:58.719730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.295 [2024-07-13 06:16:58.719746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.295 [2024-07-13 06:16:58.719761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.295 [2024-07-13 06:16:58.719777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.295 [2024-07-13 06:16:58.719792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.295 [2024-07-13 06:16:58.719809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.295 [2024-07-13 06:16:58.719823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.295 [2024-07-13 06:16:58.719839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.295 [2024-07-13 06:16:58.719854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.295 [2024-07-13 06:16:58.719877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.295 [2024-07-13 06:16:58.719897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.295 [2024-07-13 06:16:58.719922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.295 [2024-07-13 06:16:58.719936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.295 [2024-07-13 06:16:58.719952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.295 [2024-07-13 06:16:58.719967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.295 [2024-07-13 06:16:58.719982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:52.295 [2024-07-13 06:16:58.719997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.295 [2024-07-13 06:16:58.720014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.295 [2024-07-13 06:16:58.720028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.295 [2024-07-13 06:16:58.720044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.295 [2024-07-13 06:16:58.720058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.295 [2024-07-13 06:16:58.720074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.295 [2024-07-13 06:16:58.720088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.295 [2024-07-13 06:16:58.720104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.295 [2024-07-13 06:16:58.720119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.295 [2024-07-13 06:16:58.720135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.296 [2024-07-13 06:16:58.720157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.296 [2024-07-13 06:16:58.720173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.296 [2024-07-13 06:16:58.720187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.296 [2024-07-13 06:16:58.720203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.296 [2024-07-13 06:16:58.720218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.296 [2024-07-13 06:16:58.720234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.296 [2024-07-13 06:16:58.720249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.296 [2024-07-13 06:16:58.720265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.296 [2024-07-13 06:16:58.720279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.296 [2024-07-13 06:16:58.720299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:52.296 [2024-07-13 06:16:58.720314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.296 [2024-07-13 06:16:58.720331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.296 [2024-07-13 06:16:58.720346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.296 [2024-07-13 06:16:58.720363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.296 [2024-07-13 06:16:58.720377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.296 [2024-07-13 06:16:58.720393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.296 [2024-07-13 06:16:58.720408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.296 [2024-07-13 06:16:58.720424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.296 [2024-07-13 06:16:58.720438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.296 [2024-07-13 06:16:58.720454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.296 [2024-07-13 06:16:58.720468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.296 [2024-07-13 06:16:58.720485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.296 [2024-07-13 06:16:58.720499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.296 [2024-07-13 06:16:58.720515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.296 [2024-07-13 06:16:58.720530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.296 [2024-07-13 06:16:58.720546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.296 [2024-07-13 06:16:58.720561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.296 [2024-07-13 06:16:58.720577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.296 [2024-07-13 06:16:58.720592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.296 [2024-07-13 06:16:58.720608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.296 
[2024-07-13 06:16:58.720623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.296 [2024-07-13 06:16:58.720640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.296 [2024-07-13 06:16:58.720654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.296 [2024-07-13 06:16:58.720671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.296 [2024-07-13 06:16:58.720688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.296 [2024-07-13 06:16:58.720705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.296 [2024-07-13 06:16:58.720719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.296 [2024-07-13 06:16:58.720736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.296 [2024-07-13 06:16:58.720750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.296 [2024-07-13 06:16:58.720766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.296 [2024-07-13 06:16:58.720780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.296 [2024-07-13 06:16:58.720796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.296 [2024-07-13 06:16:58.720812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.296 [2024-07-13 06:16:58.720827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.296 [2024-07-13 06:16:58.720842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.296 [2024-07-13 06:16:58.720858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.296 [2024-07-13 06:16:58.720882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.296 [2024-07-13 06:16:58.720900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.296 [2024-07-13 06:16:58.720919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.296 [2024-07-13 06:16:58.720935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.296 [2024-07-13 
06:16:58.720950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.296 [2024-07-13 06:16:58.720965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.296 [2024-07-13 06:16:58.720980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.296 [2024-07-13 06:16:58.720996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.296 [2024-07-13 06:16:58.720994] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set
00:21:52.296 [2024-07-13 06:16:58.721010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.296 [2024-07-13 06:16:58.721018] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set
00:21:52.296 [2024-07-13 06:16:58.721027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.296 [2024-07-13 06:16:58.721033] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set
00:21:52.296 [2024-07-13 06:16:58.721041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.296 [2024-07-13 06:16:58.721051] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set
00:21:52.296 [2024-07-13 06:16:58.721057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.296 [2024-07-13 06:16:58.721064] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set
00:21:52.296 [2024-07-13 06:16:58.721072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.296 [2024-07-13 06:16:58.721077] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set
00:21:52.296 [2024-07-13 06:16:58.721089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.296 [2024-07-13 06:16:58.721090] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set
00:21:52.296 [2024-07-13 06:16:58.721106] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set
00:21:52.296 [2024-07-13 06:16:58.721106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.296 [2024-07-13 06:16:58.721120] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set
00:21:52.296 [2024-07-13 06:16:58.721124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.296 [2024-07-13 06:16:58.721133] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set
00:21:52.296 [2024-07-13 06:16:58.721149] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set
00:21:52.296 [2024-07-13 06:16:58.721149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.296 [2024-07-13 06:16:58.721163] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set
00:21:52.296 [2024-07-13 06:16:58.721167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.296 [2024-07-13 06:16:58.721176] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set
00:21:52.296 [2024-07-13 06:16:58.721182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.296 [2024-07-13 06:16:58.721189] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set
00:21:52.297 [2024-07-13 06:16:58.721199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.297 [2024-07-13 06:16:58.721202] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set
00:21:52.297 [2024-07-13 06:16:58.721215] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set
00:21:52.297 [2024-07-13 06:16:58.721215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.297 [2024-07-13 06:16:58.721229] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set
00:21:52.297 [2024-07-13 06:16:58.721233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.297 [2024-07-13 06:16:58.721242] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set
00:21:52.297 [2024-07-13 06:16:58.721248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.297 [2024-07-13 06:16:58.721258] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set
00:21:52.297 [2024-07-13 06:16:58.721265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.297 [2024-07-13 06:16:58.721271] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set
00:21:52.297 [2024-07-13 06:16:58.721280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.297 [2024-07-13 06:16:58.721284] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set
00:21:52.297 [2024-07-13 06:16:58.721297] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set
00:21:52.297 [2024-07-13 06:16:58.721296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.297 [2024-07-13 06:16:58.721311] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set
00:21:52.297 [2024-07-13 06:16:58.721313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.297 [2024-07-13 06:16:58.721324] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set
00:21:52.297 [2024-07-13 06:16:58.721338] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set
00:21:52.297 [2024-07-13 06:16:58.721350] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set
00:21:52.297 [2024-07-13 06:16:58.721362] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set
00:21:52.297 [2024-07-13 06:16:58.721375] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set
00:21:52.297 [2024-07-13 06:16:58.721387] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set
00:21:52.297 [2024-07-13 06:16:58.721399] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set
00:21:52.297 [2024-07-13 06:16:58.721411] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set
00:21:52.297 [2024-07-13 06:16:58.721423] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set
00:21:52.297 [2024-07-13 06:16:58.721435] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set
00:21:52.297 [2024-07-13 06:16:58.721433] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1be1f40 was disconnected and freed. reset controller.
00:21:52.297 [2024-07-13 06:16:58.721448] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set 00:21:52.297 [2024-07-13 06:16:58.721461] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set 00:21:52.297 [2024-07-13 06:16:58.721474] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set 00:21:52.297 [2024-07-13 06:16:58.721486] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set 00:21:52.297 [2024-07-13 06:16:58.721498] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set 00:21:52.297 [2024-07-13 06:16:58.721514] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set 00:21:52.297 [2024-07-13 06:16:58.721527] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set 00:21:52.297 [2024-07-13 06:16:58.721539] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set 00:21:52.297 [2024-07-13 06:16:58.721551] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set 00:21:52.297 [2024-07-13 06:16:58.721563] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set 00:21:52.297 [2024-07-13 06:16:58.721575] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set 00:21:52.297 [2024-07-13 06:16:58.721588] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set 00:21:52.297 [2024-07-13 06:16:58.721600] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set 00:21:52.297 [2024-07-13 06:16:58.721612] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set 00:21:52.297 [2024-07-13 06:16:58.721624] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set 00:21:52.297 [2024-07-13 06:16:58.721636] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set 00:21:52.297 [2024-07-13 06:16:58.721649] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set 00:21:52.297 [2024-07-13 06:16:58.721661] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set 00:21:52.297 [2024-07-13 06:16:58.721673] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set 00:21:52.297 [2024-07-13 06:16:58.721685] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set 00:21:52.297 [2024-07-13 06:16:58.721697] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set 00:21:52.297 [2024-07-13 06:16:58.721709] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is 
same with the state(5) to be set 00:21:52.297 [2024-07-13 06:16:58.721721] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set 00:21:52.297 [2024-07-13 06:16:58.721733] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set 00:21:52.297 [2024-07-13 06:16:58.721745] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set 00:21:52.297 [2024-07-13 06:16:58.721757] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set 00:21:52.297 [2024-07-13 06:16:58.721769] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set 00:21:52.297 [2024-07-13 06:16:58.721781] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set 00:21:52.297 [2024-07-13 06:16:58.721794] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set 00:21:52.297 [2024-07-13 06:16:58.721806] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87590 is same with the state(5) to be set 00:21:52.297 [2024-07-13 06:16:58.723004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.297 [2024-07-13 06:16:58.723030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.297 [2024-07-13 06:16:58.723046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.297 [2024-07-13 06:16:58.723065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.297 [2024-07-13 06:16:58.723079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.297 [2024-07-13 06:16:58.723094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.297 [2024-07-13 06:16:58.723108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.297 [2024-07-13 06:16:58.723121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.297 [2024-07-13 06:16:58.723142] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6a210 is same with the state(5) to be set 00:21:52.297 [2024-07-13 06:16:58.723227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.297 [2024-07-13 06:16:58.723248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.297 [2024-07-13 06:16:58.723264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.297 [2024-07-13 06:16:58.723277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.297 [2024-07-13 06:16:58.723291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.297 [2024-07-13 06:16:58.723305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.297 [2024-07-13 06:16:58.723319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.297 [2024-07-13 06:16:58.723332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.297 [2024-07-13 06:16:58.723345] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa54f0 is same with the state(5) to be set 00:21:52.297 [2024-07-13 06:16:58.724575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.297 [2024-07-13 06:16:58.724600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.297 [2024-07-13 06:16:58.724621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.297 [2024-07-13 06:16:58.724638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.297 [2024-07-13 06:16:58.724654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.297 [2024-07-13 06:16:58.724670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.297 [2024-07-13 06:16:58.724685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.298 [2024-07-13 06:16:58.724700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.298 [2024-07-13 06:16:58.724718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.298 [2024-07-13 06:16:58.724734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.299 [2024-07-13 06:16:58.724755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.299 [2024-07-13 06:16:58.724771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.299 [2024-07-13 06:16:58.724787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.299 [2024-07-13 06:16:58.724802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.299 [2024-07-13 06:16:58.724824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:52.299 [2024-07-13 06:16:58.724839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.299 [2024-07-13 06:16:58.724856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.299 [2024-07-13 06:16:58.724878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.299 [2024-07-13 06:16:58.724896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.299 [2024-07-13 06:16:58.724911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.299 [2024-07-13 06:16:58.724927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.299 [2024-07-13 06:16:58.724942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.299 [2024-07-13 06:16:58.724958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.299 [2024-07-13 06:16:58.724972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.299 [2024-07-13 06:16:58.724989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.299 [2024-07-13 06:16:58.725004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.299 [2024-07-13 06:16:58.725020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.299 [2024-07-13 06:16:58.725035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.299 [2024-07-13 06:16:58.725052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.299 [2024-07-13 06:16:58.725066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.299 [2024-07-13 06:16:58.725083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.299 [2024-07-13 06:16:58.725097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.299 [2024-07-13 06:16:58.725113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.299 [2024-07-13 06:16:58.725138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.299 [2024-07-13 06:16:58.725154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.299 [2024-07-13 
06:16:58.725172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.299 [2024-07-13 06:16:58.725191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.299 [2024-07-13 06:16:58.725206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.299 [2024-07-13 06:16:58.725222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.299 [2024-07-13 06:16:58.725237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.299 [2024-07-13 06:16:58.725253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.299 [2024-07-13 06:16:58.725267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.299 [2024-07-13 06:16:58.725284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.299 [2024-07-13 06:16:58.725299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.299 [2024-07-13 06:16:58.725316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.299 [2024-07-13 06:16:58.725330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.299 [2024-07-13 06:16:58.725352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.299 [2024-07-13 06:16:58.725367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.299 [2024-07-13 06:16:58.725383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.299 [2024-07-13 06:16:58.725398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.299 [2024-07-13 06:16:58.725414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.299 [2024-07-13 06:16:58.725429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.299 [2024-07-13 06:16:58.725445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.299 [2024-07-13 06:16:58.725460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.299 [2024-07-13 06:16:58.725476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.299 [2024-07-13 06:16:58.725491] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.299 [2024-07-13 06:16:58.725508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.299 [2024-07-13 06:16:58.725522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.299 [2024-07-13 06:16:58.725538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.299 [2024-07-13 06:16:58.725553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.300 [2024-07-13 06:16:58.725573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.300 [2024-07-13 06:16:58.725588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.300 [2024-07-13 06:16:58.725604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.300 [2024-07-13 06:16:58.725619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.300 [2024-07-13 06:16:58.725635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.300 [2024-07-13 06:16:58.725650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.300 [2024-07-13 06:16:58.725667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.300 [2024-07-13 06:16:58.725682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.300 [2024-07-13 06:16:58.725697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.300 [2024-07-13 06:16:58.725689] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.725717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.300 [2024-07-13 06:16:58.725723] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.725734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.300 [2024-07-13 06:16:58.725738] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.725750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.300 [2024-07-13 06:16:58.725752] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.725766] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.725768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.300 [2024-07-13 06:16:58.725779] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.725784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.300 [2024-07-13 06:16:58.725799] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.725800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.300 [2024-07-13 06:16:58.725814] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.725816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.300 [2024-07-13 06:16:58.725828] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.725834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.300 [2024-07-13 06:16:58.725841] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.725852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.300 [2024-07-13 06:16:58.725854] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.725875] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.725883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.300 [2024-07-13 06:16:58.725890] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.725899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.300 [2024-07-13 06:16:58.725903] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.725920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.300 [2024-07-13 06:16:58.725933] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.725936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.300 [2024-07-13 06:16:58.725950] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.725954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.300 [2024-07-13 06:16:58.725964] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.725969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.300 [2024-07-13 06:16:58.725978] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.725986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.300 [2024-07-13 06:16:58.725991] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.726002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.300 [2024-07-13 06:16:58.726004] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.726018] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.726021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.300 [2024-07-13 06:16:58.726031] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.726036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.300 [2024-07-13 06:16:58.726044] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.726053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.300 [2024-07-13 06:16:58.726056] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.726073] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.726073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.300 [2024-07-13 06:16:58.726087] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.726092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.300 [2024-07-13 06:16:58.726104] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.726108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.300 [2024-07-13 06:16:58.726118] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.726125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.300 [2024-07-13 06:16:58.726142] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.726144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.300 [2024-07-13 06:16:58.726157] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.726161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.300 [2024-07-13 06:16:58.726170] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.726176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.300 [2024-07-13 06:16:58.726184] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.726193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.300 [2024-07-13 06:16:58.726208] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.726208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.300 [2024-07-13 06:16:58.726228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.300 [2024-07-13 06:16:58.726224] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.726246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.300 [2024-07-13 06:16:58.726247] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.726262] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.726264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.300 [2024-07-13 06:16:58.726275] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.726280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.300 [2024-07-13 06:16:58.726314] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.726317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.300 [2024-07-13 06:16:58.726329] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.726334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.300 [2024-07-13 06:16:58.726343] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.300 [2024-07-13 06:16:58.726350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.301 [2024-07-13 06:16:58.726356] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.301 [2024-07-13 06:16:58.726366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.301 [2024-07-13 06:16:58.726370] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.301 [2024-07-13 06:16:58.726383] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.301 [2024-07-13 06:16:58.726382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.301 [2024-07-13 06:16:58.726399] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.301 [2024-07-13 06:16:58.726406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.301 [2024-07-13 06:16:58.726411] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.301 [2024-07-13 06:16:58.726423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.301 [2024-07-13 06:16:58.726425] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.301 [2024-07-13 06:16:58.726441] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.301 [2024-07-13 06:16:58.726441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.301 [2024-07-13 06:16:58.726455] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.301 [2024-07-13 06:16:58.726465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.301 [2024-07-13 06:16:58.726468] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.301 [2024-07-13 06:16:58.726480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.301 [2024-07-13 06:16:58.726482] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.301 [2024-07-13 06:16:58.726496] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.301 [2024-07-13 06:16:58.726498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.301 [2024-07-13 06:16:58.726509] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.301 [2024-07-13 06:16:58.726516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.301 [2024-07-13 06:16:58.726522] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.301 [2024-07-13 06:16:58.726533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.301 [2024-07-13 06:16:58.726535] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.301 [2024-07-13 06:16:58.726550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.301 [2024-07-13 06:16:58.726563] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.301 [2024-07-13 06:16:58.726567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.301 [2024-07-13 06:16:58.726578] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.301 [2024-07-13 06:16:58.726582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.301 [2024-07-13 06:16:58.726591] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.301 [2024-07-13 06:16:58.726598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.301 [2024-07-13 06:16:58.726604] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.301 [2024-07-13 06:16:58.726613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.301 [2024-07-13 06:16:58.726617] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.301 [2024-07-13 06:16:58.726630] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.301 [2024-07-13 06:16:58.726629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.301 [2024-07-13 06:16:58.726645] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.301 [2024-07-13 06:16:58.726646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.301 [2024-07-13 06:16:58.726657] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.301 [2024-07-13 06:16:58.726663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.301 [2024-07-13 06:16:58.726670] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87a40 is same with the state(5) to be set
00:21:52.301 [2024-07-13 06:16:58.726678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.301 [2024-07-13 06:16:58.726695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.301 [2024-07-13 06:16:58.726709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.301 [2024-07-13 06:16:58.726726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:52.301 [2024-07-13 06:16:58.726743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:52.301 [2024-07-13 06:16:58.726848] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a59e70 was disconnected and freed. reset controller.
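The block above shows the initiator printing ABORTED - SQ DELETION completions for each of its outstanding I/O commands while the target keeps logging nvmf_tcp_qpair_set_recv_state errors for the same qpair during the controller reset. When triaging a run like this it is usually easier to tally those notices than to read them line by line; a minimal sketch, assuming the console output has been saved to a local file (the name nvmf_shutdown.log is only a placeholder):
  # count I/O completions that were aborted by SQ deletion during the resets
  grep -o 'ABORTED - SQ DELETION' nvmf_shutdown.log | wc -l
  # tally the repeated recv-state errors per target qpair (0xd87a40, 0xd87ed0, ...)
  grep -o 'recv state of tqpair=0x[0-9a-f]*' nvmf_shutdown.log | sort | uniq -c | sort -rn
Both commands rely only on standard grep, sort, uniq and wc, so they can be run against any saved copy of this console log.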
00:21:52.301 [2024-07-13 06:16:58.727512] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:52.301 [2024-07-13 06:16:58.727560] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a6a210 (9): Bad file descriptor 00:21:52.301 [2024-07-13 06:16:58.728803] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.301 [2024-07-13 06:16:58.728835] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.301 [2024-07-13 06:16:58.728850] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.301 [2024-07-13 06:16:58.728863] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.301 [2024-07-13 06:16:58.728886] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.301 [2024-07-13 06:16:58.728899] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.301 [2024-07-13 06:16:58.728919] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.301 [2024-07-13 06:16:58.728931] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.301 [2024-07-13 06:16:58.728943] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.301 [2024-07-13 06:16:58.728955] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.301 [2024-07-13 06:16:58.728967] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.301 [2024-07-13 06:16:58.728979] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.301 [2024-07-13 06:16:58.728991] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.301 [2024-07-13 06:16:58.729003] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.301 [2024-07-13 06:16:58.729015] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.301 [2024-07-13 06:16:58.729028] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.301 [2024-07-13 06:16:58.729040] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.301 [2024-07-13 06:16:58.729052] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.301 [2024-07-13 06:16:58.729064] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.301 [2024-07-13 06:16:58.729076] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) 
to be set 00:21:52.301 [2024-07-13 06:16:58.729088] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.301 [2024-07-13 06:16:58.729100] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.301 [2024-07-13 06:16:58.729112] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.301 [2024-07-13 06:16:58.729125] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.301 [2024-07-13 06:16:58.729149] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.301 [2024-07-13 06:16:58.729164] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.729177] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.729189] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.729201] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.729213] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.729225] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.729238] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.729252] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.729265] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.729278] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.729290] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.729303] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.729316] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.729328] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.729341] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.729339] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:21:52.302 [2024-07-13 06:16:58.729354] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with 
the state(5) to be set
00:21:52.302 [2024-07-13 06:16:58.729367] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set
00:21:52.302 [2024-07-13 06:16:58.729379] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set
00:21:52.302 [2024-07-13 06:16:58.729392] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set
00:21:52.302 [2024-07-13 06:16:58.729405] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set
00:21:52.302 [2024-07-13 06:16:58.729402] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c30a60 (9): Bad file descriptor
00:21:52.302 [2024-07-13 06:16:58.729421] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set
00:21:52.302 [2024-07-13 06:16:58.729435] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set
00:21:52.302 [2024-07-13 06:16:58.729447] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set
00:21:52.302 [2024-07-13 06:16:58.729460] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set
00:21:52.302 [2024-07-13 06:16:58.729476] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set
00:21:52.302 [2024-07-13 06:16:58.729489] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set
00:21:52.302 [2024-07-13 06:16:58.729501] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set
00:21:52.302 [2024-07-13 06:16:58.729514] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set
00:21:52.302 [2024-07-13 06:16:58.729527] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set
00:21:52.302 [2024-07-13 06:16:58.729540] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set
00:21:52.302 [2024-07-13 06:16:58.729552] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set
00:21:52.302 [2024-07-13 06:16:58.729565] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set
00:21:52.302 [2024-07-13 06:16:58.729577] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set
00:21:52.302 [2024-07-13 06:16:58.729590] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set
00:21:52.302 [2024-07-13 06:16:58.729602] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set
00:21:52.302 [2024-07-13 06:16:58.729614] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set
00:21:52.302 [2024-07-13 06:16:58.729626] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.729638] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd87ed0 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730448] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730478] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730493] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730505] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730518] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730530] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730543] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730555] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730568] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730581] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730594] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730607] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730620] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730632] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730651] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730665] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730677] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730690] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730702] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730715] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730728] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730741] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730753] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730766] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730778] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730795] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730811] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730823] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730836] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730849] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730862] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730890] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730915] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730928] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730941] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730953] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730966] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730979] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.730991] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.731003] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.731015] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.302 [2024-07-13 06:16:58.731032] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 
00:21:52.303 [2024-07-13 06:16:58.731045] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.731058] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.731070] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.731082] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.731095] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.731108] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.731120] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.731141] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.731153] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.731165] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.731179] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.731192] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.731204] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.731217] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.731229] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.731241] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.731253] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.731265] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.731278] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.731290] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.731302] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88380 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.731318] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:52.303 
[2024-07-13 06:16:58.732083] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:52.303 [2024-07-13 06:16:58.732674] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.732705] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.732720] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.732734] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.732752] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.732765] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.732778] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.732790] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.732802] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.732814] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.732827] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.732839] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.732851] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.732863] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.732895] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.732909] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.732921] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.732934] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.732946] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.732958] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.732970] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 
06:16:58.732982] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.732994] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.733006] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.733018] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.733030] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.733042] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.733053] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.733065] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.733078] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.733090] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.733106] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.733129] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.733147] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.733160] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.733172] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.733184] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.733196] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.733208] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.733220] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.733232] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.733244] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.733256] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to 
be set 00:21:52.303 [2024-07-13 06:16:58.733268] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.733279] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.303 [2024-07-13 06:16:58.733668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.303 [2024-07-13 06:16:58.733695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.304 [2024-07-13 06:16:58.733712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.304 [2024-07-13 06:16:58.733726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.304 [2024-07-13 06:16:58.733741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.304 [2024-07-13 06:16:58.733754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.304 [2024-07-13 06:16:58.733768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.304 [2024-07-13 06:16:58.733791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.304 [2024-07-13 06:16:58.733806] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6cf70 is same with the state(5) to be set 00:21:52.304 [2024-07-13 06:16:58.733853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.304 [2024-07-13 06:16:58.733880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.304 [2024-07-13 06:16:58.733896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.304 [2024-07-13 06:16:58.733921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.304 [2024-07-13 06:16:58.733940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.304 [2024-07-13 06:16:58.733955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.304 [2024-07-13 06:16:58.733969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.304 [2024-07-13 06:16:58.733982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.304 [2024-07-13 06:16:58.733995] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cda50 is same with the state(5) to be set 00:21:52.304 [2024-07-13 06:16:58.734036] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1aa54f0 (9): Bad file descriptor 00:21:52.304 [2024-07-13 06:16:58.734111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.304 [2024-07-13 06:16:58.734133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.304 [2024-07-13 06:16:58.734148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.304 [2024-07-13 06:16:58.734162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.304 [2024-07-13 06:16:58.734176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.304 [2024-07-13 06:16:58.734193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.304 [2024-07-13 06:16:58.734207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.304 [2024-07-13 06:16:58.734221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.304 [2024-07-13 06:16:58.734235] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2b8c0 is same with the state(5) to be set 00:21:52.304 [2024-07-13 06:16:58.734330] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:52.304 [2024-07-13 06:16:58.735076] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:52.304 [2024-07-13 06:16:58.743711] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a6cf70 (9): Bad file descriptor 00:21:52.304 [2024-07-13 06:16:58.743760] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cda50 (9): Bad file descriptor 00:21:52.304 [2024-07-13 06:16:58.743832] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2b8c0 (9): Bad file descriptor 00:21:52.304 [2024-07-13 06:16:58.744149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.304 [2024-07-13 06:16:58.744174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.304 [2024-07-13 06:16:58.744199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.304 [2024-07-13 06:16:58.744216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.304 [2024-07-13 06:16:58.744235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.304 [2024-07-13 06:16:58.744250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.304 [2024-07-13 06:16:58.744273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:52.304 [2024-07-13 06:16:58.744289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.304 [2024-07-13 06:16:58.744307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.304 [2024-07-13 06:16:58.744321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.304 [2024-07-13 06:16:58.744338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.304 [2024-07-13 06:16:58.744353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.304 [2024-07-13 06:16:58.744370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.304 [2024-07-13 06:16:58.744386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.304 [2024-07-13 06:16:58.744403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.304 [2024-07-13 06:16:58.744418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.304 [2024-07-13 06:16:58.744434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.304 [2024-07-13 06:16:58.744449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.304 [2024-07-13 06:16:58.744480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.304 [2024-07-13 06:16:58.744495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.304 [2024-07-13 06:16:58.744512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.304 [2024-07-13 06:16:58.744541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.304 [2024-07-13 06:16:58.744558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.304 [2024-07-13 06:16:58.744572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.304 [2024-07-13 06:16:58.744588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.304 [2024-07-13 06:16:58.744602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.304 [2024-07-13 06:16:58.744617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:52.304 [2024-07-13 06:16:58.744631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.304 [2024-07-13 06:16:58.744646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.304 [2024-07-13 06:16:58.744660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.304 [2024-07-13 06:16:58.744675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.304 [2024-07-13 06:16:58.744693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.304 [2024-07-13 06:16:58.744709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.304 [2024-07-13 06:16:58.744723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.304 [2024-07-13 06:16:58.744739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.304 [2024-07-13 06:16:58.744753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.304 [2024-07-13 06:16:58.744768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.304 [2024-07-13 06:16:58.744782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.304 [2024-07-13 06:16:58.744798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.304 [2024-07-13 06:16:58.744811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.304 [2024-07-13 06:16:58.744827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.304 [2024-07-13 06:16:58.744841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.304 [2024-07-13 06:16:58.744880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.744897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.744940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.744956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.744972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 
06:16:58.744987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.745003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.745019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.745035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.745050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.745067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.745082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.745098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.745113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.745133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.745149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.745182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.745196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.745212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.745242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.745259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.745273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.745289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.745303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.745318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.745333] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.745349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.745363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.745379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.745393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.745409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.745423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.745438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.745452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.745468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.745482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.745498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.745528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.745546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.745563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.745596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.745613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.745630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.745645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.745662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.745676] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.745693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.745708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.745725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.745739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.745755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.745770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.745785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.745801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.745817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.745832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.745849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.745875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.745895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.745910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.745927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.745942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.745958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.745973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.745996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.746011] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.746027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.746042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.746059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.746074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.746090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.746106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.746122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.746140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.746172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.305 [2024-07-13 06:16:58.746186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.305 [2024-07-13 06:16:58.746203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.746217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.746234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.746249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.746265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.746295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.746311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.746325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.746341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.746355] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.746369] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2c64b70 is same with the state(5) to be set 00:21:52.306 [2024-07-13 06:16:58.748089] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:52.306 [2024-07-13 06:16:58.753934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.753972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.753997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.754014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.754032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.754047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.754063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.754079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.754095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.754110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.754127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.754142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.754159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.754173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.754190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.754220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.754238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.754253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.754269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.754284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.754299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.754314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.754330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.754344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.754360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.754375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.754390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.754408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.754425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.754440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.754456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.754470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.754486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.754501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.754525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.754539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.754555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.754569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.754585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.754599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.754615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.754628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.754645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.754659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.754675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.754689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.754704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.754719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.754734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.754748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.754763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.754778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.754797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.754812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.754828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.754843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.754859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.754898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:52.306 [2024-07-13 06:16:58.754920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.754935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.754953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.754968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.754984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.754998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.755016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.306 [2024-07-13 06:16:58.755031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.306 [2024-07-13 06:16:58.755052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.307 [2024-07-13 06:16:58.755067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.307 [2024-07-13 06:16:58.755083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.307 [2024-07-13 06:16:58.755098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.307 [2024-07-13 06:16:58.755115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.307 [2024-07-13 06:16:58.755130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.307 [2024-07-13 06:16:58.755146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.307 [2024-07-13 06:16:58.755161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.307 [2024-07-13 06:16:58.755177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.307 [2024-07-13 06:16:58.755207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.307 [2024-07-13 06:16:58.755225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.307 [2024-07-13 06:16:58.755242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.307 
[2024-07-13 06:16:58.755258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.307 [2024-07-13 06:16:58.755273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.307 [2024-07-13 06:16:58.755289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.307 [2024-07-13 06:16:58.755303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.307 [2024-07-13 06:16:58.755319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.307 [2024-07-13 06:16:58.755333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.307 [2024-07-13 06:16:58.755349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.307 [2024-07-13 06:16:58.755364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.307 [2024-07-13 06:16:58.755379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.307 [2024-07-13 06:16:58.755394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.307 [2024-07-13 06:16:58.755409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.307 [2024-07-13 06:16:58.755423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.307 [2024-07-13 06:16:58.755439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.307 [2024-07-13 06:16:58.755453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.307 [2024-07-13 06:16:58.755468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.307 [2024-07-13 06:16:58.755482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.307 [2024-07-13 06:16:58.755498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.307 [2024-07-13 06:16:58.755511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.307 [2024-07-13 06:16:58.755527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.307 [2024-07-13 06:16:58.755541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.307 [2024-07-13 
06:16:58.755562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.307 [2024-07-13 06:16:58.755576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.307 [2024-07-13 06:16:58.755591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.307 [2024-07-13 06:16:58.755605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.307 [2024-07-13 06:16:58.755624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.307 [2024-07-13 06:16:58.755639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.307 [2024-07-13 06:16:58.755654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.307 [2024-07-13 06:16:58.755668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.307 [2024-07-13 06:16:58.755683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.307 [2024-07-13 06:16:58.755698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.307 [2024-07-13 06:16:58.755713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.307 [2024-07-13 06:16:58.755727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.307 [2024-07-13 06:16:58.755743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.307 [2024-07-13 06:16:58.755757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.307 [2024-07-13 06:16:58.755773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.307 [2024-07-13 06:16:58.755787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.307 [2024-07-13 06:16:58.758563] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.307 [2024-07-13 06:16:58.758599] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.307 [2024-07-13 06:16:58.758613] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.307 [2024-07-13 06:16:58.758626] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.307 [2024-07-13 06:16:58.758638] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.307 [2024-07-13 06:16:58.758651] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.307 [2024-07-13 06:16:58.758663] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.307 [2024-07-13 06:16:58.758675] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.307 [2024-07-13 06:16:58.758687] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.307 [2024-07-13 06:16:58.758700] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.307 [2024-07-13 06:16:58.758712] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.307 [2024-07-13 06:16:58.758726] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.307 [2024-07-13 06:16:58.758739] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.307 [2024-07-13 06:16:58.758752] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88810 is same with the state(5) to be set 00:21:52.307 [2024-07-13 06:16:58.759527] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88cc0 is same with the state(5) to be set 00:21:52.307 [2024-07-13 06:16:58.759560] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88cc0 is same with the state(5) to be set 00:21:52.307 [2024-07-13 06:16:58.759574] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88cc0 is same with the state(5) to be set 00:21:52.307 [2024-07-13 06:16:58.759587] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88cc0 is same with the state(5) to be set 00:21:52.307 [2024-07-13 06:16:58.759599] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88cc0 is same with the state(5) to be set 00:21:52.307 [2024-07-13 06:16:58.759611] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88cc0 is same with the state(5) to be set 00:21:52.307 [2024-07-13 06:16:58.759624] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88cc0 is same with the state(5) to be set 00:21:52.307 [2024-07-13 06:16:58.759636] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88cc0 is same with the state(5) to be set 00:21:52.307 [2024-07-13 06:16:58.759648] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88cc0 is same with the state(5) to be set 00:21:52.307 [2024-07-13 06:16:58.759660] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88cc0 is same with the state(5) to be set 00:21:52.307 [2024-07-13 06:16:58.759672] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88cc0 is same with the state(5) to be set 00:21:52.307 [2024-07-13 06:16:58.759684] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88cc0 is same with the state(5) to be set 00:21:52.307 [2024-07-13 06:16:58.759697] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88cc0 is same with the state(5) to be set 00:21:52.307 [2024-07-13 06:16:58.759709] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88cc0 is same with the state(5) to be set 00:21:52.307 [2024-07-13 06:16:58.759721] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88cc0 is same with the state(5) to be set 00:21:52.307 [2024-07-13 06:16:58.759734] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88cc0 is same with the state(5) to be set 00:21:52.307 [2024-07-13 06:16:58.759746] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88cc0 is same with the state(5) to be set 00:21:52.307 [2024-07-13 06:16:58.759758] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88cc0 is same with the state(5) to be set 00:21:52.307 [2024-07-13 06:16:58.759770] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88cc0 is same with the state(5) to be set 00:21:52.307 [2024-07-13 06:16:58.759783] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd88cc0 is same with the state(5) to be set 00:21:52.308 [2024-07-13 06:16:58.773109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.308 [2024-07-13 06:16:58.773194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.308 [2024-07-13 06:16:58.773215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.308 [2024-07-13 06:16:58.773232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.308 [2024-07-13 06:16:58.773248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.308 [2024-07-13 06:16:58.773264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.308 [2024-07-13 06:16:58.773281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.308 [2024-07-13 06:16:58.773310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.308 [2024-07-13 06:16:58.773328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.308 [2024-07-13 06:16:58.773343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.308 [2024-07-13 06:16:58.773360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.308 [2024-07-13 06:16:58.773385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.308 [2024-07-13 06:16:58.773403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:52.308 [2024-07-13 06:16:58.773417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.308 [2024-07-13 06:16:58.773433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be2260 is same with the state(5) to be set 00:21:52.308 [2024-07-13 06:16:58.774806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.308 [2024-07-13 06:16:58.774832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.308 [2024-07-13 06:16:58.774860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.308 [2024-07-13 06:16:58.774887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.308 [2024-07-13 06:16:58.774907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.308 [2024-07-13 06:16:58.774922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.308 [2024-07-13 06:16:58.774939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.308 [2024-07-13 06:16:58.774953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.308 [2024-07-13 06:16:58.774970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.308 [2024-07-13 06:16:58.774985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.308 [2024-07-13 06:16:58.775002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.308 [2024-07-13 06:16:58.775017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.308 [2024-07-13 06:16:58.775033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.308 [2024-07-13 06:16:58.775048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.308 [2024-07-13 06:16:58.775065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.308 [2024-07-13 06:16:58.775081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.308 [2024-07-13 06:16:58.775097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.308 [2024-07-13 06:16:58.775128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.308 [2024-07-13 
06:16:58.775145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.308 [2024-07-13 06:16:58.775161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.308 [2024-07-13 06:16:58.775177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.308 [2024-07-13 06:16:58.775193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.308 [2024-07-13 06:16:58.775210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.308 [2024-07-13 06:16:58.775225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.308 [2024-07-13 06:16:58.775241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.308 [2024-07-13 06:16:58.775256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.308 [2024-07-13 06:16:58.775273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.308 [2024-07-13 06:16:58.775288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.308 [2024-07-13 06:16:58.775304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.308 [2024-07-13 06:16:58.775320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.308 [2024-07-13 06:16:58.775336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.308 [2024-07-13 06:16:58.775351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.308 [2024-07-13 06:16:58.775367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.308 [2024-07-13 06:16:58.775382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.308 [2024-07-13 06:16:58.775399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.308 [2024-07-13 06:16:58.775414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.308 [2024-07-13 06:16:58.775431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.308 [2024-07-13 06:16:58.775447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.308 [2024-07-13 06:16:58.775464] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.308 [2024-07-13 06:16:58.775480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.308 [2024-07-13 06:16:58.775497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.308 [2024-07-13 06:16:58.775513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.308 [2024-07-13 06:16:58.775534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.308 [2024-07-13 06:16:58.775550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.308 [2024-07-13 06:16:58.775567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.308 [2024-07-13 06:16:58.775582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.308 [2024-07-13 06:16:58.775598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.308 [2024-07-13 06:16:58.775613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.775630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.775645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.775661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.775676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.775692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.775707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.775724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.775738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.775754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.775769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.775785] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.775800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.775818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.775833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.775850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.775873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.775893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.775916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.775933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.775952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.775970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.775985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.776002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.776018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.776034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.776050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.776068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.776082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.776099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.776114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.776131] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.776147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.776163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.776178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.776195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.776209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.776225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.776241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.776256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.776271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.776287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.776302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.776318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.776333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.776352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.776367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.776383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.776398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.776415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.776429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.776445] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.776460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.776476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.776491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.776507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.776522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.776538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.776553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.776569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.776584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.776611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.776626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.776642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.776657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.776674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.776689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.776705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.776719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.776735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.776753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.776770] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.776785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.776801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.776816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.776833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.776847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.309 [2024-07-13 06:16:58.776863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.309 [2024-07-13 06:16:58.776886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.776913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.776928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.776944] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be3820 is same with the state(5) to be set 00:21:52.310 [2024-07-13 06:16:58.778159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.778184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.778205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.778221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.778239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.778254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.778271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.778286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.778303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.778318] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.778334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.778349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.778366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.778386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.778404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.778419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.778435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.778451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.778467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.778482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.778498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.778513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.778529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.778544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.778561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.778576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.778592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.778607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.778623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.778639] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.778656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.778671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.778687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.778702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.778717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.778732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.778748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.778763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.778783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.778799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.778815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.778830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.778846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.778861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.778887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.778902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.778918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.778933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.778950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.778964] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.778981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.778996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.779012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.779027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.779043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.779058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.779074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.779089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.779105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.779121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.779138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.779153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.779169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.779188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.779207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.779222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.779239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.779253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.779270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.779285] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.779302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.779317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.779333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.310 [2024-07-13 06:16:58.779348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.310 [2024-07-13 06:16:58.779365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.779380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.779396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.779411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.779427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.779442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.779458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.779474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.779490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.779505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.779522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.779538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.779555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.779569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.779592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.779608] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.779626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.779642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.779659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.779674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.779691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.779707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.779723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.779740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.779756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.779772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.779788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.779803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.779820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.779835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.779852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.779873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.779891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.779908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.779924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.779940] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.779956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.779971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.779988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.780007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.780024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.780040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.780057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.780072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.780089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.780104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.780120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.780135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.780151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.780167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.780184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.780199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.780215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.780231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.780246] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba9c30 is same with the state(5) to be set 00:21:52.311 [2024-07-13 06:16:58.781481] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.781505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.781528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.781545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.781563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.781578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.781596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.781611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.781627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.781648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.781666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.781682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.781699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.781714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.781730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.781745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.781762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.781777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.781794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.311 [2024-07-13 06:16:58.781809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.311 [2024-07-13 06:16:58.781825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.781840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.781857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.781879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.781896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.781912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.781929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.781945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.781961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.781976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.781993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.782010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.782027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.782043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.782063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.782079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.782095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.782111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.782128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.782143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.782160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30208 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.782177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.782193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.782209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.782226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.782241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.782258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.782273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.782290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.782305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.782322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.782337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.782353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.782369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.782385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.782401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.782417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.782433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.782458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.782477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.782494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.782509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.782526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.782541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.782558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.782572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.782589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.782605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.782621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.782637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.782653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.782668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.782686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.782701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.782717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.782732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.782752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.782768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.782786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.782801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.782818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:52.312 [2024-07-13 06:16:58.782833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.782850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.782876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.782898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.782914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.782931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.782947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.782963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.782979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.783001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.783017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.783034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.783049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.783066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.312 [2024-07-13 06:16:58.783082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.312 [2024-07-13 06:16:58.783098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.313 [2024-07-13 06:16:58.783114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.313 [2024-07-13 06:16:58.783130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.313 [2024-07-13 06:16:58.783145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.313 [2024-07-13 06:16:58.783162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.313 
[2024-07-13 06:16:58.783177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.313 [2024-07-13 06:16:58.783194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.313 [2024-07-13 06:16:58.783210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.313 [2024-07-13 06:16:58.783226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.313 [2024-07-13 06:16:58.783241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.313 [2024-07-13 06:16:58.783258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.313 [2024-07-13 06:16:58.783274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.313 [2024-07-13 06:16:58.783291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.313 [2024-07-13 06:16:58.783310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.313 [2024-07-13 06:16:58.783328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.313 [2024-07-13 06:16:58.783344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.313 [2024-07-13 06:16:58.783361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.313 [2024-07-13 06:16:58.783377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.313 [2024-07-13 06:16:58.783393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.313 [2024-07-13 06:16:58.783409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.313 [2024-07-13 06:16:58.783426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.313 [2024-07-13 06:16:58.783442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.313 [2024-07-13 06:16:58.783460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.313 [2024-07-13 06:16:58.783475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.313 [2024-07-13 06:16:58.783492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.313 [2024-07-13 
06:16:58.783507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.313 [2024-07-13 06:16:58.783526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.313 [2024-07-13 06:16:58.783542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.313 [2024-07-13 06:16:58.783560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.313 [2024-07-13 06:16:58.783576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.313 [2024-07-13 06:16:58.783593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.313 [2024-07-13 06:16:58.783609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.313 [2024-07-13 06:16:58.783625] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bab210 is same with the state(5) to be set 00:21:52.313 [2024-07-13 06:16:58.783701] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bab210 was disconnected and freed. reset controller. 00:21:52.313 [2024-07-13 06:16:58.783786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.313 [2024-07-13 06:16:58.783808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.313 [2024-07-13 06:16:58.783831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:52.313 [2024-07-13 06:16:58.783848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.313 [2024-07-13 06:16:58.783876] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bac7f0 is same with the state(5) to be set 00:21:52.313 [2024-07-13 06:16:58.783970] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bac7f0 was disconnected and freed. reset controller. 
00:21:52.313 [2024-07-13 06:16:58.784001] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:52.313 [2024-07-13 06:16:58.784027] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:21:52.313 [2024-07-13 06:16:58.784046] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:21:52.313 [2024-07-13 06:16:58.784143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:52.313 [2024-07-13 06:16:58.784166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.313 [2024-07-13 06:16:58.784182] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aa54f0 is same with the state(5) to be set 00:21:52.313 [2024-07-13 06:16:58.784220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:52.313 [2024-07-13 06:16:58.784240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.313 [2024-07-13 06:16:58.784254] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c30a60 is same with the state(5) to be set 00:21:52.313 [2024-07-13 06:16:58.784290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:52.313 [2024-07-13 06:16:58.784311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.313 [2024-07-13 06:16:58.784334] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6a210 is same with the state(5) to be set 00:21:52.313 [2024-07-13 06:16:58.784407] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa54f0 (9): Bad file descriptor 00:21:52.313 [2024-07-13 06:16:58.784442] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a6a210 (9): Bad file descriptor 00:21:52.313 [2024-07-13 06:16:58.784469] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c30a60 (9): Bad file descriptor 00:21:52.313 [2024-07-13 06:16:58.784518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.313 [2024-07-13 06:16:58.784538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.313 [2024-07-13 06:16:58.784555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.313 [2024-07-13 06:16:58.784570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.313 [2024-07-13 06:16:58.784585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.314 [2024-07-13 06:16:58.784600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.314 [2024-07-13 06:16:58.784615] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.314 [2024-07-13 06:16:58.784630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.314 [2024-07-13 06:16:58.784644] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a63a30 is same with the state(5) to be set 00:21:52.314 [2024-07-13 06:16:58.784694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.314 [2024-07-13 06:16:58.784725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.314 [2024-07-13 06:16:58.784753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.314 [2024-07-13 06:16:58.784778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.314 [2024-07-13 06:16:58.784801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.314 [2024-07-13 06:16:58.784817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.314 [2024-07-13 06:16:58.784832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.314 [2024-07-13 06:16:58.784847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.314 [2024-07-13 06:16:58.784861] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a63e60 is same with the state(5) to be set 00:21:52.314 [2024-07-13 06:16:58.784911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.314 [2024-07-13 06:16:58.784932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.314 [2024-07-13 06:16:58.784947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.314 [2024-07-13 06:16:58.784961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.314 [2024-07-13 06:16:58.784976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.314 [2024-07-13 06:16:58.784990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.314 [2024-07-13 06:16:58.785004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.314 [2024-07-13 06:16:58.785018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.314 [2024-07-13 06:16:58.785032] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bee830 is same with the 
state(5) to be set 00:21:52.314 [2024-07-13 06:16:58.785080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.314 [2024-07-13 06:16:58.785102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.314 [2024-07-13 06:16:58.785118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.314 [2024-07-13 06:16:58.785133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.314 [2024-07-13 06:16:58.785148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.314 [2024-07-13 06:16:58.785162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.314 [2024-07-13 06:16:58.785177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.314 [2024-07-13 06:16:58.785194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.314 [2024-07-13 06:16:58.785207] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2b490 is same with the state(5) to be set 00:21:52.314 [2024-07-13 06:16:58.787220] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:21:52.314 [2024-07-13 06:16:58.787256] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:21:52.573 task offset: 29184 on job bdev=Nvme1n1 fails 00:21:52.573 00:21:52.573 Latency(us) 00:21:52.573 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:52.573 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:52.573 Job: Nvme1n1 ended in about 0.59 seconds with error 00:21:52.573 Verification LBA range: start 0x0 length 0x400 00:21:52.573 Nvme1n1 : 0.59 350.39 21.90 107.81 0.00 138518.69 55924.05 117285.17 00:21:52.573 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:52.573 Job: Nvme2n1 ended in about 0.60 seconds with error 00:21:52.573 Verification LBA range: start 0x0 length 0x400 00:21:52.573 Nvme2n1 : 0.60 347.95 21.75 107.06 0.00 137765.88 20680.25 148354.09 00:21:52.573 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:52.573 Job: Nvme3n1 ended in about 0.64 seconds with error 00:21:52.573 Verification LBA range: start 0x0 length 0x400 00:21:52.573 Nvme3n1 : 0.64 323.20 20.20 99.45 0.00 146707.83 95536.92 115731.72 00:21:52.573 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:52.573 Job: Nvme4n1 ended in about 0.65 seconds with error 00:21:52.573 Verification LBA range: start 0x0 length 0x400 00:21:52.573 Nvme4n1 : 0.65 321.48 20.09 98.92 0.00 145769.05 93983.48 114178.28 00:21:52.573 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:52.573 Job: Nvme5n1 ended in about 0.65 seconds with error 00:21:52.573 Verification LBA range: start 0x0 length 0x400 00:21:52.573 Nvme5n1 : 0.65 319.85 19.99 98.42 0.00 144833.85 81167.55 113401.55 00:21:52.573 Job: Nvme6n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:21:52.573 Job: Nvme6n1 ended in about 0.66 seconds with error 00:21:52.573 Verification LBA range: start 0x0 length 0x400 00:21:52.573 Nvme6n1 : 0.66 317.40 19.84 97.66 0.00 144240.60 62526.20 126605.84 00:21:52.573 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:52.573 Job: Nvme7n1 ended in about 0.66 seconds with error 00:21:52.573 Verification LBA range: start 0x0 length 0x400 00:21:52.573 Nvme7n1 : 0.66 411.49 25.72 3.05 0.00 133581.09 9320.68 111071.38 00:21:52.573 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:52.573 Verification LBA range: start 0x0 length 0x400 00:21:52.573 Nvme8n1 : 0.61 373.28 23.33 0.00 0.00 154015.99 7184.69 121168.78 00:21:52.573 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:52.573 Verification LBA range: start 0x0 length 0x400 00:21:52.573 Nvme9n1 : 0.62 368.59 23.04 0.00 0.00 153291.54 11893.57 123498.95 00:21:52.573 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:52.573 Job: Nvme10n1 ended in about 0.62 seconds with error 00:21:52.573 Verification LBA range: start 0x0 length 0x400 00:21:52.573 Nvme10n1 : 0.62 266.02 16.63 103.81 0.00 152981.21 98255.45 121945.51 00:21:52.573 =================================================================================================================== 00:21:52.573 Total : 3399.65 212.48 716.18 0.00 144749.32 7184.69 148354.09 00:21:52.573 [2024-07-13 06:16:58.815380] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:52.573 [2024-07-13 06:16:58.815471] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a63e60 (9): Bad file descriptor 00:21:52.573 [2024-07-13 06:16:58.815508] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2b490 (9): Bad file descriptor 00:21:52.573 [2024-07-13 06:16:58.815707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:52.573 [2024-07-13 06:16:58.815879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:52.573 [2024-07-13 06:16:58.815908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6cf70 with addr=10.0.0.2, port=4420 00:21:52.573 [2024-07-13 06:16:58.815928] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6cf70 is same with the state(5) to be set 00:21:52.573 [2024-07-13 06:16:58.816055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:52.573 [2024-07-13 06:16:58.816178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:52.573 [2024-07-13 06:16:58.816204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19cda50 with addr=10.0.0.2, port=4420 00:21:52.573 [2024-07-13 06:16:58.816221] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19cda50 is same with the state(5) to be set 00:21:52.573 [2024-07-13 06:16:58.816344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:52.573 [2024-07-13 06:16:58.816459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:52.573 [2024-07-13 06:16:58.816486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2b8c0 with addr=10.0.0.2, port=4420 00:21:52.573 [2024-07-13 06:16:58.816502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1c2b8c0 is same with the state(5) to be set 00:21:52.573 [2024-07-13 06:16:58.817490] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a6cf70 (9): Bad file descriptor 00:21:52.573 [2024-07-13 06:16:58.817522] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19cda50 (9): Bad file descriptor 00:21:52.573 [2024-07-13 06:16:58.817543] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2b8c0 (9): Bad file descriptor 00:21:52.573 [2024-07-13 06:16:58.817560] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:52.573 [2024-07-13 06:16:58.817582] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:52.573 [2024-07-13 06:16:58.817598] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:21:52.573 [2024-07-13 06:16:58.817619] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:52.574 [2024-07-13 06:16:58.817635] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:21:52.574 [2024-07-13 06:16:58.817648] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:21:52.574 [2024-07-13 06:16:58.817667] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:52.574 [2024-07-13 06:16:58.817681] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:52.574 [2024-07-13 06:16:58.817695] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:52.574 [2024-07-13 06:16:58.817743] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.574 [2024-07-13 06:16:58.817771] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.574 [2024-07-13 06:16:58.817791] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.574 [2024-07-13 06:16:58.817813] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a63a30 (9): Bad file descriptor 00:21:52.574 [2024-07-13 06:16:58.817847] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bee830 (9): Bad file descriptor 00:21:52.574 [2024-07-13 06:16:58.817892] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.574 [2024-07-13 06:16:58.817915] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.574 [2024-07-13 06:16:58.817941] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:52.574 [2024-07-13 06:16:58.818546] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:52.574 [2024-07-13 06:16:58.818571] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:52.574 [2024-07-13 06:16:58.818585] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
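errno 111 in the posix_sock_create messages above is ECONNREFUSED: the bdev_nvme reconnect attempts are refused because the nvmf target that served 10.0.0.2:4420 is already gone, so each controller ends up in the failed state and the resets cannot complete. When triaging a run like this, a quick check from outside the test scripts (standard iproute2 tooling, assuming the cvl_0_0_ns_spdk namespace the harness creates) confirms whether any listener is left:

ip netns exec cvl_0_0_ns_spdk ss -ltn | grep -w 4420 \
    || echo "no NVMe/TCP listener on 4420 -> connect() fails with ECONNREFUSED (111)"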
00:21:52.574 [2024-07-13 06:16:58.818727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:52.574 [2024-07-13 06:16:58.818864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:52.574 [2024-07-13 06:16:58.818920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2b490 with addr=10.0.0.2, port=4420 00:21:52.574 [2024-07-13 06:16:58.818936] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2b490 is same with the state(5) to be set 00:21:52.574 [2024-07-13 06:16:58.819051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:52.574 [2024-07-13 06:16:58.819184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:52.574 [2024-07-13 06:16:58.819210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a63e60 with addr=10.0.0.2, port=4420 00:21:52.574 [2024-07-13 06:16:58.819227] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a63e60 is same with the state(5) to be set 00:21:52.574 [2024-07-13 06:16:58.819242] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:52.574 [2024-07-13 06:16:58.819256] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:52.574 [2024-07-13 06:16:58.819269] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:21:52.574 [2024-07-13 06:16:58.819289] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:21:52.574 [2024-07-13 06:16:58.819304] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:21:52.574 [2024-07-13 06:16:58.819329] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:21:52.574 [2024-07-13 06:16:58.819346] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:21:52.574 [2024-07-13 06:16:58.819361] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:21:52.574 [2024-07-13 06:16:58.819374] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:21:52.574 [2024-07-13 06:16:58.819462] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:52.574 [2024-07-13 06:16:58.819484] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:52.574 [2024-07-13 06:16:58.819496] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
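All of these nvme_ctrlr/bdev_nvme messages are emitted by the initiator-side bdevperf process, not by the target, so its RPC socket is the natural place to look at what is left behind after the failures. A small sketch, assuming a bdevperf instance exposing its RPC socket at /var/tmp/bdevperf.sock as the multicontroller test below does (the controller name here is illustrative):

# list the controllers bdevperf still holds after the target went away
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
# optionally drop one that is stuck in the failed state
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller Nvme3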
00:21:52.574 [2024-07-13 06:16:58.819513] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2b490 (9): Bad file descriptor 00:21:52.574 [2024-07-13 06:16:58.819533] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a63e60 (9): Bad file descriptor 00:21:52.574 [2024-07-13 06:16:58.819594] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:21:52.574 [2024-07-13 06:16:58.819615] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:21:52.574 [2024-07-13 06:16:58.819629] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:21:52.574 [2024-07-13 06:16:58.819646] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:21:52.574 [2024-07-13 06:16:58.819661] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:21:52.574 [2024-07-13 06:16:58.819680] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:21:52.574 [2024-07-13 06:16:58.819719] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:52.574 [2024-07-13 06:16:58.819736] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:52.832 06:16:59 -- target/shutdown.sh@135 -- # nvmfpid= 00:21:52.832 06:16:59 -- target/shutdown.sh@138 -- # sleep 1 00:21:54.212 06:17:00 -- target/shutdown.sh@141 -- # kill -9 1180803 00:21:54.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (1180803) - No such process 00:21:54.212 06:17:00 -- target/shutdown.sh@141 -- # true 00:21:54.212 06:17:00 -- target/shutdown.sh@143 -- # stoptarget 00:21:54.212 06:17:00 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:54.212 06:17:00 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:54.212 06:17:00 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:54.212 06:17:00 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:54.212 06:17:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:54.212 06:17:00 -- nvmf/common.sh@116 -- # sync 00:21:54.212 06:17:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:54.212 06:17:00 -- nvmf/common.sh@119 -- # set +e 00:21:54.212 06:17:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:54.212 06:17:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:54.212 rmmod nvme_tcp 00:21:54.212 rmmod nvme_fabrics 00:21:54.212 rmmod nvme_keyring 00:21:54.212 06:17:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:54.212 06:17:00 -- nvmf/common.sh@123 -- # set -e 00:21:54.212 06:17:00 -- nvmf/common.sh@124 -- # return 0 00:21:54.212 06:17:00 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:21:54.212 06:17:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:54.212 06:17:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:54.212 06:17:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:54.212 06:17:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:54.212 06:17:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:54.212 06:17:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.212 06:17:00 
-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:54.212 06:17:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.120 06:17:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:56.120 00:21:56.120 real 0m8.061s 00:21:56.120 user 0m20.825s 00:21:56.120 sys 0m1.503s 00:21:56.120 06:17:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:56.120 06:17:02 -- common/autotest_common.sh@10 -- # set +x 00:21:56.120 ************************************ 00:21:56.120 END TEST nvmf_shutdown_tc3 00:21:56.120 ************************************ 00:21:56.120 06:17:02 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:21:56.120 00:21:56.120 real 0m29.632s 00:21:56.120 user 1m26.947s 00:21:56.120 sys 0m6.556s 00:21:56.120 06:17:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:56.120 06:17:02 -- common/autotest_common.sh@10 -- # set +x 00:21:56.120 ************************************ 00:21:56.120 END TEST nvmf_shutdown 00:21:56.120 ************************************ 00:21:56.120 06:17:02 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:21:56.120 06:17:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:56.120 06:17:02 -- common/autotest_common.sh@10 -- # set +x 00:21:56.120 06:17:02 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:21:56.120 06:17:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:56.120 06:17:02 -- common/autotest_common.sh@10 -- # set +x 00:21:56.120 06:17:02 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:21:56.120 06:17:02 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:56.120 06:17:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:56.120 06:17:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:56.120 06:17:02 -- common/autotest_common.sh@10 -- # set +x 00:21:56.120 ************************************ 00:21:56.120 START TEST nvmf_multicontroller 00:21:56.120 ************************************ 00:21:56.120 06:17:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:56.120 * Looking for test storage... 
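run_test wraps each suite in the timing and banner lines seen here, and the whole nvmf_multicontroller suite that follows is a single shell script. To rerun just this part outside Jenkins, something like the following should be enough, assuming an SPDK checkout with the test dependencies installed and the same e810/TCP setup (root is required for the module, namespace and iptables handling the script performs):

cd /path/to/spdk   # this CI run uses /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sudo ./test/nvmf/host/multicontroller.sh --transport=tcp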
00:21:56.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:56.120 06:17:02 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:56.120 06:17:02 -- nvmf/common.sh@7 -- # uname -s 00:21:56.120 06:17:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:56.120 06:17:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:56.120 06:17:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:56.120 06:17:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:56.120 06:17:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:56.120 06:17:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:56.120 06:17:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:56.120 06:17:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:56.120 06:17:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:56.120 06:17:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:56.120 06:17:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.120 06:17:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.120 06:17:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:56.120 06:17:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:56.120 06:17:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:56.120 06:17:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:56.120 06:17:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:56.120 06:17:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:56.120 06:17:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:56.120 06:17:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.120 06:17:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.120 06:17:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.120 06:17:02 -- paths/export.sh@5 -- # export PATH 00:21:56.120 06:17:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.120 06:17:02 -- nvmf/common.sh@46 -- # : 0 00:21:56.120 06:17:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:56.120 06:17:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:56.120 06:17:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:56.120 06:17:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:56.120 06:17:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:56.120 06:17:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:56.120 06:17:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:56.120 06:17:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:56.120 06:17:02 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:56.120 06:17:02 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:56.120 06:17:02 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:56.120 06:17:02 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:56.120 06:17:02 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:56.120 06:17:02 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:56.120 06:17:02 -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:56.120 06:17:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:56.120 06:17:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:56.120 06:17:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:56.120 06:17:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:56.120 06:17:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:56.120 06:17:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.120 06:17:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:56.120 06:17:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.120 06:17:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:56.120 06:17:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:56.120 06:17:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:56.120 06:17:02 -- common/autotest_common.sh@10 -- # set +x 00:21:58.022 06:17:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:58.022 06:17:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:58.022 06:17:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:58.022 06:17:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:58.022 
06:17:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:58.022 06:17:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:58.022 06:17:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:58.022 06:17:04 -- nvmf/common.sh@294 -- # net_devs=() 00:21:58.022 06:17:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:58.022 06:17:04 -- nvmf/common.sh@295 -- # e810=() 00:21:58.022 06:17:04 -- nvmf/common.sh@295 -- # local -ga e810 00:21:58.022 06:17:04 -- nvmf/common.sh@296 -- # x722=() 00:21:58.022 06:17:04 -- nvmf/common.sh@296 -- # local -ga x722 00:21:58.022 06:17:04 -- nvmf/common.sh@297 -- # mlx=() 00:21:58.022 06:17:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:58.022 06:17:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:58.022 06:17:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:58.022 06:17:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:58.022 06:17:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:58.022 06:17:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:58.022 06:17:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:58.022 06:17:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:58.022 06:17:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:58.022 06:17:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:58.022 06:17:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:58.022 06:17:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:58.022 06:17:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:58.022 06:17:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:58.022 06:17:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:58.022 06:17:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:58.022 06:17:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:58.022 06:17:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:58.022 06:17:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:58.022 06:17:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:58.022 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:58.022 06:17:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:58.022 06:17:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:58.022 06:17:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.022 06:17:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.022 06:17:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:58.022 06:17:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:58.022 06:17:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:58.022 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:58.022 06:17:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:58.022 06:17:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:58.022 06:17:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.022 06:17:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.022 06:17:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:58.022 06:17:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:58.022 06:17:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:58.022 06:17:04 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:58.022 06:17:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:21:58.022 06:17:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.022 06:17:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:58.022 06:17:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.022 06:17:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:58.022 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:58.022 06:17:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.022 06:17:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:58.022 06:17:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.022 06:17:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:58.022 06:17:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.022 06:17:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:58.022 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:58.022 06:17:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.022 06:17:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:58.022 06:17:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:58.022 06:17:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:58.022 06:17:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:58.022 06:17:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:58.022 06:17:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:58.022 06:17:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:58.022 06:17:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:58.022 06:17:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:58.022 06:17:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:58.022 06:17:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:58.022 06:17:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:58.022 06:17:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:58.022 06:17:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:58.022 06:17:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:58.022 06:17:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:58.022 06:17:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:58.022 06:17:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:58.311 06:17:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:58.311 06:17:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:58.311 06:17:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:58.311 06:17:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:58.311 06:17:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:58.311 06:17:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:58.311 06:17:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:58.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:58.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:21:58.311 00:21:58.311 --- 10.0.0.2 ping statistics --- 00:21:58.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.311 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:21:58.311 06:17:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:58.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:58.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:21:58.311 00:21:58.311 --- 10.0.0.1 ping statistics --- 00:21:58.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.311 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:21:58.311 06:17:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:58.311 06:17:04 -- nvmf/common.sh@410 -- # return 0 00:21:58.311 06:17:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:58.311 06:17:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:58.311 06:17:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:58.311 06:17:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:58.311 06:17:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:58.311 06:17:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:58.311 06:17:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:58.311 06:17:04 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:58.312 06:17:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:58.312 06:17:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:58.312 06:17:04 -- common/autotest_common.sh@10 -- # set +x 00:21:58.312 06:17:04 -- nvmf/common.sh@469 -- # nvmfpid=1183226 00:21:58.312 06:17:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:58.312 06:17:04 -- nvmf/common.sh@470 -- # waitforlisten 1183226 00:21:58.312 06:17:04 -- common/autotest_common.sh@819 -- # '[' -z 1183226 ']' 00:21:58.312 06:17:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.312 06:17:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:58.312 06:17:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.312 06:17:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:58.312 06:17:04 -- common/autotest_common.sh@10 -- # set +x 00:21:58.312 [2024-07-13 06:17:04.689027] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:58.312 [2024-07-13 06:17:04.689095] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.312 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.312 [2024-07-13 06:17:04.756918] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:58.574 [2024-07-13 06:17:04.875542] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:58.574 [2024-07-13 06:17:04.875713] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:58.574 [2024-07-13 06:17:04.875733] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:58.574 [2024-07-13 06:17:04.875748] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:58.574 [2024-07-13 06:17:04.875855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.574 [2024-07-13 06:17:04.875955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:58.574 [2024-07-13 06:17:04.875960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.512 06:17:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:59.512 06:17:05 -- common/autotest_common.sh@852 -- # return 0 00:21:59.512 06:17:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:59.512 06:17:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:59.512 06:17:05 -- common/autotest_common.sh@10 -- # set +x 00:21:59.512 06:17:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:59.512 06:17:05 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:59.512 06:17:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.512 06:17:05 -- common/autotest_common.sh@10 -- # set +x 00:21:59.512 [2024-07-13 06:17:05.688322] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:59.512 06:17:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.512 06:17:05 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:59.512 06:17:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.512 06:17:05 -- common/autotest_common.sh@10 -- # set +x 00:21:59.512 Malloc0 00:21:59.512 06:17:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.512 06:17:05 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:59.512 06:17:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.512 06:17:05 -- common/autotest_common.sh@10 -- # set +x 00:21:59.512 06:17:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.512 06:17:05 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:59.512 06:17:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.512 06:17:05 -- common/autotest_common.sh@10 -- # set +x 00:21:59.512 06:17:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.512 06:17:05 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:59.512 06:17:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.512 06:17:05 -- common/autotest_common.sh@10 -- # set +x 00:21:59.512 [2024-07-13 06:17:05.747836] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:59.512 06:17:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.512 06:17:05 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:59.512 06:17:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.512 06:17:05 -- common/autotest_common.sh@10 -- # set +x 00:21:59.512 [2024-07-13 06:17:05.755754] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:59.512 06:17:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.512 06:17:05 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:59.512 06:17:05 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:21:59.512 06:17:05 -- common/autotest_common.sh@10 -- # set +x 00:21:59.512 Malloc1 00:21:59.512 06:17:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.512 06:17:05 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:59.512 06:17:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.512 06:17:05 -- common/autotest_common.sh@10 -- # set +x 00:21:59.512 06:17:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.512 06:17:05 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:59.512 06:17:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.512 06:17:05 -- common/autotest_common.sh@10 -- # set +x 00:21:59.512 06:17:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.512 06:17:05 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:59.512 06:17:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.512 06:17:05 -- common/autotest_common.sh@10 -- # set +x 00:21:59.512 06:17:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.512 06:17:05 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:59.512 06:17:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.512 06:17:05 -- common/autotest_common.sh@10 -- # set +x 00:21:59.512 06:17:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.512 06:17:05 -- host/multicontroller.sh@44 -- # bdevperf_pid=1183385 00:21:59.512 06:17:05 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:59.512 06:17:05 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:59.512 06:17:05 -- host/multicontroller.sh@47 -- # waitforlisten 1183385 /var/tmp/bdevperf.sock 00:21:59.512 06:17:05 -- common/autotest_common.sh@819 -- # '[' -z 1183385 ']' 00:21:59.512 06:17:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:59.512 06:17:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:59.512 06:17:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:59.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
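Condensed from the rpc_cmd trace above, the target-side bring-up for this test amounts to starting nvmf_tgt inside the namespace prepared earlier, then creating the TCP transport, a malloc bdev, a subsystem with that bdev as a namespace, and listeners on both ports; rpc_cmd is a thin wrapper around scripts/rpc.py. A sketch of the same sequence, with cnode2/Malloc1 configured identically and the wait for the RPC socket simplified to a sleep:

RPC=./scripts/rpc.py
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
sleep 2   # the scripts use waitforlisten; a fixed sleep is only for illustration
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# initiator side: bdevperf in RPC mode, driven over /var/tmp/bdevperf.sock
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &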
00:21:59.512 06:17:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:59.512 06:17:05 -- common/autotest_common.sh@10 -- # set +x 00:22:00.446 06:17:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:00.446 06:17:06 -- common/autotest_common.sh@852 -- # return 0 00:22:00.446 06:17:06 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:00.446 06:17:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.446 06:17:06 -- common/autotest_common.sh@10 -- # set +x 00:22:00.706 NVMe0n1 00:22:00.706 06:17:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.706 06:17:07 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:00.706 06:17:07 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:00.706 06:17:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.706 06:17:07 -- common/autotest_common.sh@10 -- # set +x 00:22:00.706 06:17:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.706 1 00:22:00.706 06:17:07 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:00.706 06:17:07 -- common/autotest_common.sh@640 -- # local es=0 00:22:00.706 06:17:07 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:00.706 06:17:07 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:00.706 06:17:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:00.706 06:17:07 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:00.706 06:17:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:00.706 06:17:07 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:00.706 06:17:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.706 06:17:07 -- common/autotest_common.sh@10 -- # set +x 00:22:00.706 request: 00:22:00.706 { 00:22:00.706 "name": "NVMe0", 00:22:00.706 "trtype": "tcp", 00:22:00.706 "traddr": "10.0.0.2", 00:22:00.706 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:00.706 "hostaddr": "10.0.0.2", 00:22:00.706 "hostsvcid": "60000", 00:22:00.706 "adrfam": "ipv4", 00:22:00.706 "trsvcid": "4420", 00:22:00.706 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.706 "method": "bdev_nvme_attach_controller", 00:22:00.706 "req_id": 1 00:22:00.706 } 00:22:00.706 Got JSON-RPC error response 00:22:00.706 response: 00:22:00.706 { 00:22:00.706 "code": -114, 00:22:00.706 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:00.706 } 00:22:00.706 06:17:07 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:00.706 06:17:07 -- common/autotest_common.sh@643 -- # es=1 00:22:00.706 06:17:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:00.706 06:17:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:00.706 06:17:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:00.706 06:17:07 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:00.706 06:17:07 -- common/autotest_common.sh@640 -- # local es=0 00:22:00.706 06:17:07 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:00.706 06:17:07 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:00.706 06:17:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:00.706 06:17:07 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:00.706 06:17:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:00.706 06:17:07 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:00.706 06:17:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.706 06:17:07 -- common/autotest_common.sh@10 -- # set +x 00:22:00.706 request: 00:22:00.706 { 00:22:00.706 "name": "NVMe0", 00:22:00.706 "trtype": "tcp", 00:22:00.706 "traddr": "10.0.0.2", 00:22:00.706 "hostaddr": "10.0.0.2", 00:22:00.706 "hostsvcid": "60000", 00:22:00.706 "adrfam": "ipv4", 00:22:00.706 "trsvcid": "4420", 00:22:00.706 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:00.706 "method": "bdev_nvme_attach_controller", 00:22:00.706 "req_id": 1 00:22:00.706 } 00:22:00.706 Got JSON-RPC error response 00:22:00.706 response: 00:22:00.706 { 00:22:00.706 "code": -114, 00:22:00.706 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:00.706 } 00:22:00.706 06:17:07 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:00.706 06:17:07 -- common/autotest_common.sh@643 -- # es=1 00:22:00.706 06:17:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:00.706 06:17:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:00.706 06:17:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:00.706 06:17:07 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:00.706 06:17:07 -- common/autotest_common.sh@640 -- # local es=0 00:22:00.706 06:17:07 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:00.706 06:17:07 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:00.706 06:17:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:00.706 06:17:07 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:00.706 06:17:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:00.706 06:17:07 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:00.707 06:17:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.707 06:17:07 -- common/autotest_common.sh@10 -- # set +x 00:22:00.707 request: 00:22:00.707 { 00:22:00.707 "name": "NVMe0", 00:22:00.707 "trtype": "tcp", 00:22:00.707 "traddr": "10.0.0.2", 00:22:00.707 "hostaddr": 
"10.0.0.2", 00:22:00.707 "hostsvcid": "60000", 00:22:00.707 "adrfam": "ipv4", 00:22:00.707 "trsvcid": "4420", 00:22:00.707 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.707 "multipath": "disable", 00:22:00.707 "method": "bdev_nvme_attach_controller", 00:22:00.707 "req_id": 1 00:22:00.707 } 00:22:00.707 Got JSON-RPC error response 00:22:00.707 response: 00:22:00.707 { 00:22:00.707 "code": -114, 00:22:00.707 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:22:00.707 } 00:22:00.707 06:17:07 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:00.707 06:17:07 -- common/autotest_common.sh@643 -- # es=1 00:22:00.707 06:17:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:00.707 06:17:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:00.707 06:17:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:00.707 06:17:07 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:00.707 06:17:07 -- common/autotest_common.sh@640 -- # local es=0 00:22:00.707 06:17:07 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:00.707 06:17:07 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:00.707 06:17:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:00.707 06:17:07 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:00.707 06:17:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:00.707 06:17:07 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:00.707 06:17:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.707 06:17:07 -- common/autotest_common.sh@10 -- # set +x 00:22:00.707 request: 00:22:00.707 { 00:22:00.707 "name": "NVMe0", 00:22:00.707 "trtype": "tcp", 00:22:00.707 "traddr": "10.0.0.2", 00:22:00.707 "hostaddr": "10.0.0.2", 00:22:00.707 "hostsvcid": "60000", 00:22:00.707 "adrfam": "ipv4", 00:22:00.707 "trsvcid": "4420", 00:22:00.707 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.707 "multipath": "failover", 00:22:00.707 "method": "bdev_nvme_attach_controller", 00:22:00.707 "req_id": 1 00:22:00.707 } 00:22:00.707 Got JSON-RPC error response 00:22:00.707 response: 00:22:00.707 { 00:22:00.707 "code": -114, 00:22:00.707 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:00.707 } 00:22:00.707 06:17:07 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:00.707 06:17:07 -- common/autotest_common.sh@643 -- # es=1 00:22:00.707 06:17:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:00.707 06:17:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:00.707 06:17:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:00.707 06:17:07 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:00.707 06:17:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.707 06:17:07 -- common/autotest_common.sh@10 -- # set +x 00:22:00.707 00:22:00.707 06:17:07 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:22:00.707 06:17:07 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:00.707 06:17:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.707 06:17:07 -- common/autotest_common.sh@10 -- # set +x 00:22:00.707 06:17:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.707 06:17:07 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:00.707 06:17:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.707 06:17:07 -- common/autotest_common.sh@10 -- # set +x 00:22:00.965 00:22:00.965 06:17:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.965 06:17:07 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:00.965 06:17:07 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:00.965 06:17:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.965 06:17:07 -- common/autotest_common.sh@10 -- # set +x 00:22:00.965 06:17:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.965 06:17:07 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:00.965 06:17:07 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:02.340 0 00:22:02.340 06:17:08 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:02.340 06:17:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:02.340 06:17:08 -- common/autotest_common.sh@10 -- # set +x 00:22:02.340 06:17:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:02.340 06:17:08 -- host/multicontroller.sh@100 -- # killprocess 1183385 00:22:02.340 06:17:08 -- common/autotest_common.sh@926 -- # '[' -z 1183385 ']' 00:22:02.340 06:17:08 -- common/autotest_common.sh@930 -- # kill -0 1183385 00:22:02.340 06:17:08 -- common/autotest_common.sh@931 -- # uname 00:22:02.340 06:17:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:02.340 06:17:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1183385 00:22:02.340 06:17:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:02.340 06:17:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:02.340 06:17:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1183385' 00:22:02.340 killing process with pid 1183385 00:22:02.340 06:17:08 -- common/autotest_common.sh@945 -- # kill 1183385 00:22:02.340 06:17:08 -- common/autotest_common.sh@950 -- # wait 1183385 00:22:02.599 06:17:08 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:02.599 06:17:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:02.599 06:17:08 -- common/autotest_common.sh@10 -- # set +x 00:22:02.599 06:17:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:02.599 06:17:08 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:02.599 06:17:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:02.599 06:17:08 -- common/autotest_common.sh@10 -- # set +x 00:22:02.599 06:17:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:02.599 06:17:08 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
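The JSON-RPC request/response pairs above pin down the bdev_nvme_attach_controller behaviour this test exercises: re-using the controller name NVMe0 is rejected with -114 when the request repeats the same network path, points at a different subsystem or host identity, or when -x disable forbids extra paths; the attach against the 4421 listener is the one variation that is accepted, because it adds a new path to the same subsystem. Distilled into plain rpc.py calls (rpc_cmd above is the same thing through the test wrapper):

SOCK=/var/tmp/bdevperf.sock
# first path: creates controller NVMe0 (and bdev NVMe0n1)
./scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
# second path to the same subsystem via the 4421 listener: accepted
./scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# list what is attached (the trace above checks the count with grep -c NVMe)
./scripts/rpc.py -s $SOCK bdev_nvme_get_controllers
# and drive I/O through the existing bdevs
./examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests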
00:22:02.599 06:17:08 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:02.599 06:17:08 -- common/autotest_common.sh@1597 -- # read -r file 00:22:02.599 06:17:08 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:02.599 06:17:08 -- common/autotest_common.sh@1596 -- # sort -u 00:22:02.599 06:17:08 -- common/autotest_common.sh@1598 -- # cat 00:22:02.599 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:02.599 [2024-07-13 06:17:05.850416] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:02.599 [2024-07-13 06:17:05.850515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1183385 ] 00:22:02.599 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.599 [2024-07-13 06:17:05.916041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.599 [2024-07-13 06:17:06.024287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.599 [2024-07-13 06:17:07.428494] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 40110b67-cbb9-4e8d-9261-6952b4545f8b already exists 00:22:02.599 [2024-07-13 06:17:07.428536] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:40110b67-cbb9-4e8d-9261-6952b4545f8b alias for bdev NVMe1n1 00:22:02.599 [2024-07-13 06:17:07.428554] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:02.599 Running I/O for 1 seconds... 00:22:02.599 00:22:02.599 Latency(us) 00:22:02.599 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.599 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:02.599 NVMe0n1 : 1.01 18593.47 72.63 0.00 0.00 6865.87 4004.98 13689.74 00:22:02.599 =================================================================================================================== 00:22:02.599 Total : 18593.47 72.63 0.00 0.00 6865.87 4004.98 13689.74 00:22:02.599 Received shutdown signal, test time was about 1.000000 seconds 00:22:02.599 00:22:02.599 Latency(us) 00:22:02.599 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.599 =================================================================================================================== 00:22:02.599 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:02.599 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:02.599 06:17:08 -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:02.599 06:17:08 -- common/autotest_common.sh@1597 -- # read -r file 00:22:02.599 06:17:08 -- host/multicontroller.sh@108 -- # nvmftestfini 00:22:02.599 06:17:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:02.599 06:17:08 -- nvmf/common.sh@116 -- # sync 00:22:02.599 06:17:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:02.599 06:17:08 -- nvmf/common.sh@119 -- # set +e 00:22:02.599 06:17:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:02.599 06:17:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:02.599 rmmod nvme_tcp 00:22:02.599 rmmod nvme_fabrics 00:22:02.599 rmmod nvme_keyring 00:22:02.599 06:17:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:02.599 06:17:08 -- nvmf/common.sh@123 -- # set 
-e 00:22:02.599 06:17:08 -- nvmf/common.sh@124 -- # return 0 00:22:02.599 06:17:08 -- nvmf/common.sh@477 -- # '[' -n 1183226 ']' 00:22:02.599 06:17:08 -- nvmf/common.sh@478 -- # killprocess 1183226 00:22:02.599 06:17:08 -- common/autotest_common.sh@926 -- # '[' -z 1183226 ']' 00:22:02.599 06:17:08 -- common/autotest_common.sh@930 -- # kill -0 1183226 00:22:02.599 06:17:08 -- common/autotest_common.sh@931 -- # uname 00:22:02.599 06:17:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:02.599 06:17:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1183226 00:22:02.599 06:17:08 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:02.599 06:17:08 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:02.599 06:17:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1183226' 00:22:02.599 killing process with pid 1183226 00:22:02.599 06:17:08 -- common/autotest_common.sh@945 -- # kill 1183226 00:22:02.599 06:17:08 -- common/autotest_common.sh@950 -- # wait 1183226 00:22:02.857 06:17:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:02.857 06:17:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:02.857 06:17:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:02.857 06:17:09 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:02.857 06:17:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:02.857 06:17:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.857 06:17:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:02.857 06:17:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.391 06:17:11 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:05.391 00:22:05.391 real 0m8.849s 00:22:05.391 user 0m17.187s 00:22:05.391 sys 0m2.345s 00:22:05.391 06:17:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:05.391 06:17:11 -- common/autotest_common.sh@10 -- # set +x 00:22:05.391 ************************************ 00:22:05.391 END TEST nvmf_multicontroller 00:22:05.391 ************************************ 00:22:05.391 06:17:11 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:05.391 06:17:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:05.391 06:17:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:05.391 06:17:11 -- common/autotest_common.sh@10 -- # set +x 00:22:05.391 ************************************ 00:22:05.391 START TEST nvmf_aer 00:22:05.391 ************************************ 00:22:05.391 06:17:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:05.391 * Looking for test storage... 
00:22:05.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:05.391 06:17:11 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:05.391 06:17:11 -- nvmf/common.sh@7 -- # uname -s 00:22:05.391 06:17:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:05.391 06:17:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:05.391 06:17:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:05.391 06:17:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:05.391 06:17:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:05.391 06:17:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:05.391 06:17:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:05.391 06:17:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:05.391 06:17:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:05.391 06:17:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:05.391 06:17:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.391 06:17:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.391 06:17:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:05.391 06:17:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:05.391 06:17:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:05.391 06:17:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:05.391 06:17:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:05.391 06:17:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:05.391 06:17:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:05.391 06:17:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.391 06:17:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.391 06:17:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.391 06:17:11 -- paths/export.sh@5 -- # export PATH 00:22:05.391 06:17:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.391 06:17:11 -- nvmf/common.sh@46 -- # : 0 00:22:05.391 06:17:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:05.391 06:17:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:05.392 06:17:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:05.392 06:17:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:05.392 06:17:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:05.392 06:17:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:05.392 06:17:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:05.392 06:17:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:05.392 06:17:11 -- host/aer.sh@11 -- # nvmftestinit 00:22:05.392 06:17:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:05.392 06:17:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:05.392 06:17:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:05.392 06:17:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:05.392 06:17:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:05.392 06:17:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.392 06:17:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:05.392 06:17:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.392 06:17:11 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:05.392 06:17:11 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:05.392 06:17:11 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:05.392 06:17:11 -- common/autotest_common.sh@10 -- # set +x 00:22:07.293 06:17:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:07.293 06:17:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:07.293 06:17:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:07.293 06:17:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:07.293 06:17:13 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:07.293 06:17:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:07.294 06:17:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:07.294 06:17:13 -- nvmf/common.sh@294 -- # net_devs=() 00:22:07.294 06:17:13 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:07.294 06:17:13 -- nvmf/common.sh@295 -- # e810=() 00:22:07.294 06:17:13 -- nvmf/common.sh@295 -- # local -ga e810 00:22:07.294 06:17:13 -- nvmf/common.sh@296 -- # x722=() 00:22:07.294 
06:17:13 -- nvmf/common.sh@296 -- # local -ga x722 00:22:07.294 06:17:13 -- nvmf/common.sh@297 -- # mlx=() 00:22:07.294 06:17:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:07.294 06:17:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:07.294 06:17:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:07.294 06:17:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:07.294 06:17:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:07.294 06:17:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:07.294 06:17:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:07.294 06:17:13 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:07.294 06:17:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:07.294 06:17:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:07.294 06:17:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:07.294 06:17:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:07.294 06:17:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:07.294 06:17:13 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:07.294 06:17:13 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:07.294 06:17:13 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:07.294 06:17:13 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:07.294 06:17:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:07.294 06:17:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:07.294 06:17:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:07.294 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:07.294 06:17:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:07.294 06:17:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:07.294 06:17:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.294 06:17:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.294 06:17:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:07.294 06:17:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:07.294 06:17:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:07.294 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:07.294 06:17:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:07.294 06:17:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:07.294 06:17:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.294 06:17:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.294 06:17:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:07.294 06:17:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:07.294 06:17:13 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:07.294 06:17:13 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:07.294 06:17:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:07.294 06:17:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.294 06:17:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:07.294 06:17:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.294 06:17:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:07.294 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:07.294 06:17:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.294 06:17:13 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:07.294 06:17:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.294 06:17:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:07.294 06:17:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.294 06:17:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:07.294 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:07.294 06:17:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.294 06:17:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:07.294 06:17:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:07.294 06:17:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:07.294 06:17:13 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:07.294 06:17:13 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:07.294 06:17:13 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:07.294 06:17:13 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:07.294 06:17:13 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:07.294 06:17:13 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:07.294 06:17:13 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:07.294 06:17:13 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:07.294 06:17:13 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:07.294 06:17:13 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:07.294 06:17:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:07.294 06:17:13 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:07.294 06:17:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:07.294 06:17:13 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:07.294 06:17:13 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:07.294 06:17:13 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:07.294 06:17:13 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:07.294 06:17:13 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:07.294 06:17:13 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:07.294 06:17:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:07.294 06:17:13 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:07.294 06:17:13 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:07.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:07.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:22:07.294 00:22:07.294 --- 10.0.0.2 ping statistics --- 00:22:07.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.294 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:22:07.294 06:17:13 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:07.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:07.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:22:07.294 00:22:07.294 --- 10.0.0.1 ping statistics --- 00:22:07.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.294 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:22:07.294 06:17:13 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:07.294 06:17:13 -- nvmf/common.sh@410 -- # return 0 00:22:07.294 06:17:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:07.294 06:17:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:07.294 06:17:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:07.294 06:17:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:07.294 06:17:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:07.294 06:17:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:07.294 06:17:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:07.294 06:17:13 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:07.294 06:17:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:07.294 06:17:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:07.294 06:17:13 -- common/autotest_common.sh@10 -- # set +x 00:22:07.294 06:17:13 -- nvmf/common.sh@469 -- # nvmfpid=1185748 00:22:07.294 06:17:13 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:07.294 06:17:13 -- nvmf/common.sh@470 -- # waitforlisten 1185748 00:22:07.294 06:17:13 -- common/autotest_common.sh@819 -- # '[' -z 1185748 ']' 00:22:07.294 06:17:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.294 06:17:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:07.294 06:17:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:07.294 06:17:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:07.294 06:17:13 -- common/autotest_common.sh@10 -- # set +x 00:22:07.294 [2024-07-13 06:17:13.533983] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:07.294 [2024-07-13 06:17:13.534069] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:07.294 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.294 [2024-07-13 06:17:13.603166] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:07.294 [2024-07-13 06:17:13.720831] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:07.294 [2024-07-13 06:17:13.721011] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:07.294 [2024-07-13 06:17:13.721030] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:07.294 [2024-07-13 06:17:13.721043] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
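Before the aer test proper starts, nvmftestinit (logged just above) wires the two E810 ports into a point-to-point TCP setup: one port (cvl_0_0) is moved into a network namespace and addressed as the target at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened in iptables, and a ping in each direction confirms the link before nvmf_tgt is started inside the namespace. Condensed from the commands in the log (run as root; the cvl_* names come from this machine's ice-driven NIC and will differ elsewhere):

    ip netns add cvl_0_0_ns_spdk                          # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move one E810 port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator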
00:22:07.294 [2024-07-13 06:17:13.721097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:07.294 [2024-07-13 06:17:13.721221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:07.294 [2024-07-13 06:17:13.721287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:07.294 [2024-07-13 06:17:13.721291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.227 06:17:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:08.227 06:17:14 -- common/autotest_common.sh@852 -- # return 0 00:22:08.227 06:17:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:08.227 06:17:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:08.227 06:17:14 -- common/autotest_common.sh@10 -- # set +x 00:22:08.227 06:17:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:08.227 06:17:14 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:08.227 06:17:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:08.227 06:17:14 -- common/autotest_common.sh@10 -- # set +x 00:22:08.227 [2024-07-13 06:17:14.475220] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:08.227 06:17:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:08.227 06:17:14 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:08.227 06:17:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:08.227 06:17:14 -- common/autotest_common.sh@10 -- # set +x 00:22:08.227 Malloc0 00:22:08.227 06:17:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:08.227 06:17:14 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:08.227 06:17:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:08.227 06:17:14 -- common/autotest_common.sh@10 -- # set +x 00:22:08.227 06:17:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:08.227 06:17:14 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:08.227 06:17:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:08.227 06:17:14 -- common/autotest_common.sh@10 -- # set +x 00:22:08.227 06:17:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:08.227 06:17:14 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:08.227 06:17:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:08.227 06:17:14 -- common/autotest_common.sh@10 -- # set +x 00:22:08.227 [2024-07-13 06:17:14.526161] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:08.227 06:17:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:08.227 06:17:14 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:08.227 06:17:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:08.227 06:17:14 -- common/autotest_common.sh@10 -- # set +x 00:22:08.227 [2024-07-13 06:17:14.533906] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:22:08.227 [ 00:22:08.227 { 00:22:08.227 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:08.227 "subtype": "Discovery", 00:22:08.227 "listen_addresses": [], 00:22:08.227 "allow_any_host": true, 00:22:08.227 "hosts": [] 00:22:08.227 }, 00:22:08.227 { 00:22:08.227 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:22:08.227 "subtype": "NVMe", 00:22:08.227 "listen_addresses": [ 00:22:08.227 { 00:22:08.227 "transport": "TCP", 00:22:08.227 "trtype": "TCP", 00:22:08.227 "adrfam": "IPv4", 00:22:08.227 "traddr": "10.0.0.2", 00:22:08.227 "trsvcid": "4420" 00:22:08.227 } 00:22:08.227 ], 00:22:08.227 "allow_any_host": true, 00:22:08.227 "hosts": [], 00:22:08.227 "serial_number": "SPDK00000000000001", 00:22:08.227 "model_number": "SPDK bdev Controller", 00:22:08.227 "max_namespaces": 2, 00:22:08.227 "min_cntlid": 1, 00:22:08.227 "max_cntlid": 65519, 00:22:08.227 "namespaces": [ 00:22:08.227 { 00:22:08.227 "nsid": 1, 00:22:08.227 "bdev_name": "Malloc0", 00:22:08.227 "name": "Malloc0", 00:22:08.227 "nguid": "C596C70D801C4C898A0A852BF8827419", 00:22:08.227 "uuid": "c596c70d-801c-4c89-8a0a-852bf8827419" 00:22:08.227 } 00:22:08.227 ] 00:22:08.227 } 00:22:08.227 ] 00:22:08.227 06:17:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:08.227 06:17:14 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:08.227 06:17:14 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:08.227 06:17:14 -- host/aer.sh@33 -- # aerpid=1185907 00:22:08.227 06:17:14 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:08.227 06:17:14 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:08.227 06:17:14 -- common/autotest_common.sh@1244 -- # local i=0 00:22:08.227 06:17:14 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:08.227 06:17:14 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:22:08.227 06:17:14 -- common/autotest_common.sh@1247 -- # i=1 00:22:08.227 06:17:14 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:22:08.227 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.227 06:17:14 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:08.227 06:17:14 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:22:08.227 06:17:14 -- common/autotest_common.sh@1247 -- # i=2 00:22:08.227 06:17:14 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:22:08.486 06:17:14 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:08.486 06:17:14 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:08.486 06:17:14 -- common/autotest_common.sh@1255 -- # return 0 00:22:08.486 06:17:14 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:08.486 06:17:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:08.486 06:17:14 -- common/autotest_common.sh@10 -- # set +x 00:22:08.486 Malloc1 00:22:08.486 06:17:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:08.486 06:17:14 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:08.486 06:17:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:08.486 06:17:14 -- common/autotest_common.sh@10 -- # set +x 00:22:08.486 06:17:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:08.486 06:17:14 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:08.486 06:17:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:08.486 06:17:14 -- common/autotest_common.sh@10 -- # set +x 00:22:08.486 Asynchronous Event Request test 00:22:08.486 Attaching to 10.0.0.2 00:22:08.486 Attached to 10.0.0.2 00:22:08.486 Registering asynchronous event callbacks... 
00:22:08.486 Starting namespace attribute notice tests for all controllers... 00:22:08.486 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:08.486 aer_cb - Changed Namespace 00:22:08.486 Cleaning up... 00:22:08.486 [ 00:22:08.486 { 00:22:08.486 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:08.486 "subtype": "Discovery", 00:22:08.486 "listen_addresses": [], 00:22:08.486 "allow_any_host": true, 00:22:08.486 "hosts": [] 00:22:08.486 }, 00:22:08.486 { 00:22:08.486 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.486 "subtype": "NVMe", 00:22:08.486 "listen_addresses": [ 00:22:08.486 { 00:22:08.486 "transport": "TCP", 00:22:08.486 "trtype": "TCP", 00:22:08.486 "adrfam": "IPv4", 00:22:08.486 "traddr": "10.0.0.2", 00:22:08.486 "trsvcid": "4420" 00:22:08.486 } 00:22:08.486 ], 00:22:08.486 "allow_any_host": true, 00:22:08.486 "hosts": [], 00:22:08.486 "serial_number": "SPDK00000000000001", 00:22:08.486 "model_number": "SPDK bdev Controller", 00:22:08.486 "max_namespaces": 2, 00:22:08.486 "min_cntlid": 1, 00:22:08.486 "max_cntlid": 65519, 00:22:08.486 "namespaces": [ 00:22:08.486 { 00:22:08.486 "nsid": 1, 00:22:08.486 "bdev_name": "Malloc0", 00:22:08.486 "name": "Malloc0", 00:22:08.486 "nguid": "C596C70D801C4C898A0A852BF8827419", 00:22:08.486 "uuid": "c596c70d-801c-4c89-8a0a-852bf8827419" 00:22:08.486 }, 00:22:08.486 { 00:22:08.486 "nsid": 2, 00:22:08.486 "bdev_name": "Malloc1", 00:22:08.486 "name": "Malloc1", 00:22:08.486 "nguid": "AE1F1581E9CE4B538839C7B30D63CCF1", 00:22:08.486 "uuid": "ae1f1581-e9ce-4b53-8839-c7b30d63ccf1" 00:22:08.486 } 00:22:08.486 ] 00:22:08.486 } 00:22:08.486 ] 00:22:08.486 06:17:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:08.486 06:17:14 -- host/aer.sh@43 -- # wait 1185907 00:22:08.486 06:17:14 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:08.486 06:17:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:08.486 06:17:14 -- common/autotest_common.sh@10 -- # set +x 00:22:08.486 06:17:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:08.486 06:17:14 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:08.486 06:17:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:08.486 06:17:14 -- common/autotest_common.sh@10 -- # set +x 00:22:08.486 06:17:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:08.486 06:17:14 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:08.486 06:17:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:08.486 06:17:14 -- common/autotest_common.sh@10 -- # set +x 00:22:08.486 06:17:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:08.486 06:17:14 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:08.486 06:17:14 -- host/aer.sh@51 -- # nvmftestfini 00:22:08.487 06:17:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:08.487 06:17:14 -- nvmf/common.sh@116 -- # sync 00:22:08.487 06:17:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:08.487 06:17:14 -- nvmf/common.sh@119 -- # set +e 00:22:08.487 06:17:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:08.487 06:17:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:08.487 rmmod nvme_tcp 00:22:08.487 rmmod nvme_fabrics 00:22:08.487 rmmod nvme_keyring 00:22:08.487 06:17:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:08.487 06:17:14 -- nvmf/common.sh@123 -- # set -e 00:22:08.487 06:17:14 -- nvmf/common.sh@124 -- # return 0 00:22:08.487 06:17:14 -- nvmf/common.sh@477 -- # '[' -n 1185748 ']' 00:22:08.487 06:17:14 
-- nvmf/common.sh@478 -- # killprocess 1185748 00:22:08.487 06:17:14 -- common/autotest_common.sh@926 -- # '[' -z 1185748 ']' 00:22:08.487 06:17:14 -- common/autotest_common.sh@930 -- # kill -0 1185748 00:22:08.487 06:17:14 -- common/autotest_common.sh@931 -- # uname 00:22:08.487 06:17:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:08.487 06:17:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1185748 00:22:08.487 06:17:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:08.487 06:17:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:08.487 06:17:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1185748' 00:22:08.487 killing process with pid 1185748 00:22:08.487 06:17:14 -- common/autotest_common.sh@945 -- # kill 1185748 00:22:08.487 [2024-07-13 06:17:14.976654] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:22:08.487 06:17:14 -- common/autotest_common.sh@950 -- # wait 1185748 00:22:08.746 06:17:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:08.746 06:17:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:08.746 06:17:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:08.746 06:17:15 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:08.746 06:17:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:08.746 06:17:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.746 06:17:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:08.746 06:17:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.289 06:17:17 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:11.289 00:22:11.289 real 0m5.936s 00:22:11.289 user 0m6.812s 00:22:11.289 sys 0m1.820s 00:22:11.289 06:17:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:11.289 06:17:17 -- common/autotest_common.sh@10 -- # set +x 00:22:11.289 ************************************ 00:22:11.289 END TEST nvmf_aer 00:22:11.289 ************************************ 00:22:11.289 06:17:17 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:11.289 06:17:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:11.289 06:17:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:11.289 06:17:17 -- common/autotest_common.sh@10 -- # set +x 00:22:11.289 ************************************ 00:22:11.289 START TEST nvmf_async_init 00:22:11.289 ************************************ 00:22:11.289 06:17:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:11.289 * Looking for test storage... 
00:22:11.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:11.290 06:17:17 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:11.290 06:17:17 -- nvmf/common.sh@7 -- # uname -s 00:22:11.290 06:17:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:11.290 06:17:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:11.290 06:17:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:11.290 06:17:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:11.290 06:17:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:11.290 06:17:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:11.290 06:17:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:11.290 06:17:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:11.290 06:17:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:11.290 06:17:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:11.290 06:17:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:11.290 06:17:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:11.290 06:17:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:11.290 06:17:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:11.290 06:17:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:11.290 06:17:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:11.290 06:17:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:11.290 06:17:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:11.290 06:17:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:11.290 06:17:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.290 06:17:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.290 06:17:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.290 06:17:17 -- paths/export.sh@5 -- # export PATH 00:22:11.290 06:17:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.290 06:17:17 -- nvmf/common.sh@46 -- # : 0 00:22:11.290 06:17:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:11.290 06:17:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:11.290 06:17:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:11.290 06:17:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:11.290 06:17:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:11.290 06:17:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:11.290 06:17:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:11.290 06:17:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:11.290 06:17:17 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:11.290 06:17:17 -- host/async_init.sh@14 -- # null_block_size=512 00:22:11.290 06:17:17 -- host/async_init.sh@15 -- # null_bdev=null0 00:22:11.290 06:17:17 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:11.290 06:17:17 -- host/async_init.sh@20 -- # uuidgen 00:22:11.290 06:17:17 -- host/async_init.sh@20 -- # tr -d - 00:22:11.290 06:17:17 -- host/async_init.sh@20 -- # nguid=408201103a5e4cd2b2dcf4afa2ddcc40 00:22:11.290 06:17:17 -- host/async_init.sh@22 -- # nvmftestinit 00:22:11.290 06:17:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:11.290 06:17:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:11.290 06:17:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:11.290 06:17:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:11.290 06:17:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:11.290 06:17:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.290 06:17:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:11.290 06:17:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.290 06:17:17 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:11.290 06:17:17 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:11.290 06:17:17 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:11.290 06:17:17 -- common/autotest_common.sh@10 -- # set +x 00:22:13.196 06:17:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:13.196 06:17:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:13.196 06:17:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:13.196 06:17:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:13.196 06:17:19 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:13.196 06:17:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:13.196 06:17:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:13.196 06:17:19 -- nvmf/common.sh@294 -- # net_devs=() 00:22:13.196 06:17:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:13.196 06:17:19 -- nvmf/common.sh@295 -- # e810=() 00:22:13.196 06:17:19 -- nvmf/common.sh@295 -- # local -ga e810 00:22:13.196 06:17:19 -- nvmf/common.sh@296 -- # x722=() 00:22:13.196 06:17:19 -- nvmf/common.sh@296 -- # local -ga x722 00:22:13.196 06:17:19 -- nvmf/common.sh@297 -- # mlx=() 00:22:13.196 06:17:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:13.196 06:17:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:13.196 06:17:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:13.196 06:17:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:13.196 06:17:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:13.196 06:17:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:13.196 06:17:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:13.196 06:17:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:13.196 06:17:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:13.196 06:17:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:13.196 06:17:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:13.196 06:17:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:13.196 06:17:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:13.196 06:17:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:13.196 06:17:19 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:13.196 06:17:19 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:13.196 06:17:19 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:13.196 06:17:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:13.196 06:17:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:13.196 06:17:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:13.196 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:13.196 06:17:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:13.196 06:17:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:13.196 06:17:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:13.196 06:17:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:13.196 06:17:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:13.196 06:17:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:13.196 06:17:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:13.196 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:13.196 06:17:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:13.196 06:17:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:13.196 06:17:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:13.196 06:17:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:13.196 06:17:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:13.196 06:17:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:13.196 06:17:19 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:13.196 06:17:19 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:13.196 06:17:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:13.196 
06:17:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.196 06:17:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:13.196 06:17:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.196 06:17:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:13.196 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:13.196 06:17:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.196 06:17:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:13.196 06:17:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:13.196 06:17:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:13.196 06:17:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:13.196 06:17:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:13.196 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:13.196 06:17:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:13.196 06:17:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:13.196 06:17:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:13.196 06:17:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:13.196 06:17:19 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:13.196 06:17:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:13.196 06:17:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:13.196 06:17:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:13.196 06:17:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:13.196 06:17:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:13.196 06:17:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:13.196 06:17:19 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:13.196 06:17:19 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:13.196 06:17:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:13.196 06:17:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:13.196 06:17:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:13.196 06:17:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:13.196 06:17:19 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:13.196 06:17:19 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:13.196 06:17:19 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:13.196 06:17:19 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:13.196 06:17:19 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:13.196 06:17:19 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:13.196 06:17:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:13.196 06:17:19 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:13.196 06:17:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:13.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:13.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:22:13.196 00:22:13.196 --- 10.0.0.2 ping statistics --- 00:22:13.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.196 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:22:13.196 06:17:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:13.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:13.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:22:13.196 00:22:13.196 --- 10.0.0.1 ping statistics --- 00:22:13.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:13.196 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:22:13.196 06:17:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:13.196 06:17:19 -- nvmf/common.sh@410 -- # return 0 00:22:13.196 06:17:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:13.196 06:17:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:13.196 06:17:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:13.196 06:17:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:13.196 06:17:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:13.196 06:17:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:13.196 06:17:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:13.196 06:17:19 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:13.196 06:17:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:13.196 06:17:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:13.196 06:17:19 -- common/autotest_common.sh@10 -- # set +x 00:22:13.196 06:17:19 -- nvmf/common.sh@469 -- # nvmfpid=1187859 00:22:13.196 06:17:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:13.196 06:17:19 -- nvmf/common.sh@470 -- # waitforlisten 1187859 00:22:13.196 06:17:19 -- common/autotest_common.sh@819 -- # '[' -z 1187859 ']' 00:22:13.196 06:17:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.196 06:17:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:13.196 06:17:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:13.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:13.196 06:17:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:13.196 06:17:19 -- common/autotest_common.sh@10 -- # set +x 00:22:13.196 [2024-07-13 06:17:19.429319] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:13.196 [2024-07-13 06:17:19.429394] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:13.196 EAL: No free 2048 kB hugepages reported on node 1 00:22:13.196 [2024-07-13 06:17:19.499677] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.196 [2024-07-13 06:17:19.608756] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:13.196 [2024-07-13 06:17:19.608932] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:13.196 [2024-07-13 06:17:19.608965] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:13.196 [2024-07-13 06:17:19.608977] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
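With the interfaces in place, nvmfappstart launches nvmf_tgt inside the target namespace (pinned to a single core here) and the async_init test then assembles its fixture entirely over JSON-RPC: a 1 GiB null bdev with 512-byte blocks, subsystem cnode0 carrying that bdev as namespace 1 under the pre-generated NGUID, a TCP listener on 4420, and a host-side attach whose bdev (nvme0n1) reports the NGUID as its UUID and survives a controller reset. The following is a rough sketch of that sequence driven by hand with scripts/rpc.py; the paths and NGUID are the ones from this run, and the readiness loop is only a stand-in for the test suite's waitforlisten helper.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout used by this job
    rpc="$SPDK/scripts/rpc.py"
    nguid=408201103a5e4cd2b2dcf4afa2ddcc40                   # NGUID generated for this run

    # Start the target inside the namespace set up earlier and wait for its RPC socket.
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
    until $rpc rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

    # Target side: transport, null bdev (1024 MB, 512-byte blocks), subsystem, namespace, listener.
    $rpc nvmf_create_transport -t tcp -o
    $rpc bdev_null_create null0 1024 512
    $rpc bdev_wait_for_examine
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g $nguid
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # Host side (same application in this test): attach, inspect, then reset the controller.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    $rpc bdev_get_bdevs -b nvme0n1        # uuid in the output matches the NGUID above
    $rpc bdev_nvme_reset_controller nvme0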
00:22:13.196 [2024-07-13 06:17:19.609006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.138 06:17:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:14.138 06:17:20 -- common/autotest_common.sh@852 -- # return 0 00:22:14.139 06:17:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:14.139 06:17:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:14.139 06:17:20 -- common/autotest_common.sh@10 -- # set +x 00:22:14.139 06:17:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:14.139 06:17:20 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:14.139 06:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.139 06:17:20 -- common/autotest_common.sh@10 -- # set +x 00:22:14.139 [2024-07-13 06:17:20.462641] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:14.139 06:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.139 06:17:20 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:14.139 06:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.139 06:17:20 -- common/autotest_common.sh@10 -- # set +x 00:22:14.139 null0 00:22:14.139 06:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.139 06:17:20 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:14.139 06:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.139 06:17:20 -- common/autotest_common.sh@10 -- # set +x 00:22:14.139 06:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.139 06:17:20 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:14.139 06:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.139 06:17:20 -- common/autotest_common.sh@10 -- # set +x 00:22:14.139 06:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.139 06:17:20 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 408201103a5e4cd2b2dcf4afa2ddcc40 00:22:14.139 06:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.139 06:17:20 -- common/autotest_common.sh@10 -- # set +x 00:22:14.139 06:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.139 06:17:20 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:14.139 06:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.139 06:17:20 -- common/autotest_common.sh@10 -- # set +x 00:22:14.139 [2024-07-13 06:17:20.502877] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:14.139 06:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.139 06:17:20 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:14.139 06:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.139 06:17:20 -- common/autotest_common.sh@10 -- # set +x 00:22:14.399 nvme0n1 00:22:14.399 06:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.399 06:17:20 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:14.399 06:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.399 06:17:20 -- common/autotest_common.sh@10 -- # set +x 00:22:14.399 [ 00:22:14.399 { 00:22:14.399 "name": "nvme0n1", 00:22:14.399 "aliases": [ 00:22:14.399 
"40820110-3a5e-4cd2-b2dc-f4afa2ddcc40" 00:22:14.399 ], 00:22:14.399 "product_name": "NVMe disk", 00:22:14.399 "block_size": 512, 00:22:14.399 "num_blocks": 2097152, 00:22:14.399 "uuid": "40820110-3a5e-4cd2-b2dc-f4afa2ddcc40", 00:22:14.399 "assigned_rate_limits": { 00:22:14.399 "rw_ios_per_sec": 0, 00:22:14.399 "rw_mbytes_per_sec": 0, 00:22:14.399 "r_mbytes_per_sec": 0, 00:22:14.399 "w_mbytes_per_sec": 0 00:22:14.399 }, 00:22:14.399 "claimed": false, 00:22:14.399 "zoned": false, 00:22:14.399 "supported_io_types": { 00:22:14.399 "read": true, 00:22:14.399 "write": true, 00:22:14.399 "unmap": false, 00:22:14.399 "write_zeroes": true, 00:22:14.399 "flush": true, 00:22:14.399 "reset": true, 00:22:14.399 "compare": true, 00:22:14.399 "compare_and_write": true, 00:22:14.399 "abort": true, 00:22:14.399 "nvme_admin": true, 00:22:14.399 "nvme_io": true 00:22:14.399 }, 00:22:14.399 "driver_specific": { 00:22:14.399 "nvme": [ 00:22:14.399 { 00:22:14.399 "trid": { 00:22:14.399 "trtype": "TCP", 00:22:14.399 "adrfam": "IPv4", 00:22:14.399 "traddr": "10.0.0.2", 00:22:14.399 "trsvcid": "4420", 00:22:14.399 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:14.399 }, 00:22:14.399 "ctrlr_data": { 00:22:14.399 "cntlid": 1, 00:22:14.399 "vendor_id": "0x8086", 00:22:14.399 "model_number": "SPDK bdev Controller", 00:22:14.399 "serial_number": "00000000000000000000", 00:22:14.399 "firmware_revision": "24.01.1", 00:22:14.399 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:14.399 "oacs": { 00:22:14.399 "security": 0, 00:22:14.399 "format": 0, 00:22:14.399 "firmware": 0, 00:22:14.399 "ns_manage": 0 00:22:14.399 }, 00:22:14.399 "multi_ctrlr": true, 00:22:14.399 "ana_reporting": false 00:22:14.399 }, 00:22:14.399 "vs": { 00:22:14.399 "nvme_version": "1.3" 00:22:14.399 }, 00:22:14.399 "ns_data": { 00:22:14.399 "id": 1, 00:22:14.399 "can_share": true 00:22:14.399 } 00:22:14.399 } 00:22:14.399 ], 00:22:14.399 "mp_policy": "active_passive" 00:22:14.399 } 00:22:14.399 } 00:22:14.399 ] 00:22:14.399 06:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.399 06:17:20 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:14.399 06:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.399 06:17:20 -- common/autotest_common.sh@10 -- # set +x 00:22:14.399 [2024-07-13 06:17:20.751449] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:14.399 [2024-07-13 06:17:20.751543] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xebaa80 (9): Bad file descriptor 00:22:14.399 [2024-07-13 06:17:20.884011] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:14.399 06:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.399 06:17:20 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:14.399 06:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.399 06:17:20 -- common/autotest_common.sh@10 -- # set +x 00:22:14.399 [ 00:22:14.399 { 00:22:14.399 "name": "nvme0n1", 00:22:14.399 "aliases": [ 00:22:14.399 "40820110-3a5e-4cd2-b2dc-f4afa2ddcc40" 00:22:14.399 ], 00:22:14.399 "product_name": "NVMe disk", 00:22:14.399 "block_size": 512, 00:22:14.399 "num_blocks": 2097152, 00:22:14.399 "uuid": "40820110-3a5e-4cd2-b2dc-f4afa2ddcc40", 00:22:14.399 "assigned_rate_limits": { 00:22:14.399 "rw_ios_per_sec": 0, 00:22:14.400 "rw_mbytes_per_sec": 0, 00:22:14.400 "r_mbytes_per_sec": 0, 00:22:14.400 "w_mbytes_per_sec": 0 00:22:14.400 }, 00:22:14.400 "claimed": false, 00:22:14.400 "zoned": false, 00:22:14.400 "supported_io_types": { 00:22:14.400 "read": true, 00:22:14.400 "write": true, 00:22:14.400 "unmap": false, 00:22:14.400 "write_zeroes": true, 00:22:14.400 "flush": true, 00:22:14.400 "reset": true, 00:22:14.400 "compare": true, 00:22:14.400 "compare_and_write": true, 00:22:14.400 "abort": true, 00:22:14.400 "nvme_admin": true, 00:22:14.400 "nvme_io": true 00:22:14.400 }, 00:22:14.400 "driver_specific": { 00:22:14.400 "nvme": [ 00:22:14.400 { 00:22:14.400 "trid": { 00:22:14.400 "trtype": "TCP", 00:22:14.400 "adrfam": "IPv4", 00:22:14.400 "traddr": "10.0.0.2", 00:22:14.400 "trsvcid": "4420", 00:22:14.400 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:14.400 }, 00:22:14.400 "ctrlr_data": { 00:22:14.400 "cntlid": 2, 00:22:14.400 "vendor_id": "0x8086", 00:22:14.400 "model_number": "SPDK bdev Controller", 00:22:14.400 "serial_number": "00000000000000000000", 00:22:14.400 "firmware_revision": "24.01.1", 00:22:14.400 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:14.400 "oacs": { 00:22:14.400 "security": 0, 00:22:14.400 "format": 0, 00:22:14.400 "firmware": 0, 00:22:14.400 "ns_manage": 0 00:22:14.400 }, 00:22:14.400 "multi_ctrlr": true, 00:22:14.400 "ana_reporting": false 00:22:14.400 }, 00:22:14.400 "vs": { 00:22:14.400 "nvme_version": "1.3" 00:22:14.400 }, 00:22:14.400 "ns_data": { 00:22:14.400 "id": 1, 00:22:14.400 "can_share": true 00:22:14.400 } 00:22:14.400 } 00:22:14.400 ], 00:22:14.400 "mp_policy": "active_passive" 00:22:14.400 } 00:22:14.400 } 00:22:14.400 ] 00:22:14.400 06:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.400 06:17:20 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:14.400 06:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.400 06:17:20 -- common/autotest_common.sh@10 -- # set +x 00:22:14.659 06:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.659 06:17:20 -- host/async_init.sh@53 -- # mktemp 00:22:14.659 06:17:20 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.BbFbI7NPZo 00:22:14.659 06:17:20 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:14.659 06:17:20 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.BbFbI7NPZo 00:22:14.659 06:17:20 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:14.659 06:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.659 06:17:20 -- common/autotest_common.sh@10 -- # set +x 00:22:14.659 06:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.659 06:17:20 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:14.659 06:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.659 06:17:20 -- common/autotest_common.sh@10 -- # set +x 00:22:14.659 [2024-07-13 06:17:20.932078] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:14.659 [2024-07-13 06:17:20.932207] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:14.659 06:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.659 06:17:20 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BbFbI7NPZo 00:22:14.659 06:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.659 06:17:20 -- common/autotest_common.sh@10 -- # set +x 00:22:14.659 06:17:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.659 06:17:20 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BbFbI7NPZo 00:22:14.659 06:17:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.659 06:17:20 -- common/autotest_common.sh@10 -- # set +x 00:22:14.659 [2024-07-13 06:17:20.948094] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:14.659 nvme0n1 00:22:14.659 06:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.659 06:17:21 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:14.659 06:17:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.659 06:17:21 -- common/autotest_common.sh@10 -- # set +x 00:22:14.659 [ 00:22:14.659 { 00:22:14.659 "name": "nvme0n1", 00:22:14.659 "aliases": [ 00:22:14.659 "40820110-3a5e-4cd2-b2dc-f4afa2ddcc40" 00:22:14.659 ], 00:22:14.659 "product_name": "NVMe disk", 00:22:14.659 "block_size": 512, 00:22:14.659 "num_blocks": 2097152, 00:22:14.659 "uuid": "40820110-3a5e-4cd2-b2dc-f4afa2ddcc40", 00:22:14.659 "assigned_rate_limits": { 00:22:14.659 "rw_ios_per_sec": 0, 00:22:14.659 "rw_mbytes_per_sec": 0, 00:22:14.659 "r_mbytes_per_sec": 0, 00:22:14.659 "w_mbytes_per_sec": 0 00:22:14.659 }, 00:22:14.660 "claimed": false, 00:22:14.660 "zoned": false, 00:22:14.660 "supported_io_types": { 00:22:14.660 "read": true, 00:22:14.660 "write": true, 00:22:14.660 "unmap": false, 00:22:14.660 "write_zeroes": true, 00:22:14.660 "flush": true, 00:22:14.660 "reset": true, 00:22:14.660 "compare": true, 00:22:14.660 "compare_and_write": true, 00:22:14.660 "abort": true, 00:22:14.660 "nvme_admin": true, 00:22:14.660 "nvme_io": true 00:22:14.660 }, 00:22:14.660 "driver_specific": { 00:22:14.660 "nvme": [ 00:22:14.660 { 00:22:14.660 "trid": { 00:22:14.660 "trtype": "TCP", 00:22:14.660 "adrfam": "IPv4", 00:22:14.660 "traddr": "10.0.0.2", 00:22:14.660 "trsvcid": "4421", 00:22:14.660 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:14.660 }, 00:22:14.660 "ctrlr_data": { 00:22:14.660 "cntlid": 3, 00:22:14.660 "vendor_id": "0x8086", 00:22:14.660 "model_number": "SPDK bdev Controller", 00:22:14.660 "serial_number": "00000000000000000000", 00:22:14.660 "firmware_revision": "24.01.1", 00:22:14.660 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:14.660 "oacs": { 00:22:14.660 "security": 0, 00:22:14.660 "format": 0, 00:22:14.660 "firmware": 0, 00:22:14.660 "ns_manage": 0 00:22:14.660 }, 00:22:14.660 "multi_ctrlr": true, 00:22:14.660 "ana_reporting": false 00:22:14.660 }, 00:22:14.660 "vs": 
{ 00:22:14.660 "nvme_version": "1.3" 00:22:14.660 }, 00:22:14.660 "ns_data": { 00:22:14.660 "id": 1, 00:22:14.660 "can_share": true 00:22:14.660 } 00:22:14.660 } 00:22:14.660 ], 00:22:14.660 "mp_policy": "active_passive" 00:22:14.660 } 00:22:14.660 } 00:22:14.660 ] 00:22:14.660 06:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.660 06:17:21 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:14.660 06:17:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:14.660 06:17:21 -- common/autotest_common.sh@10 -- # set +x 00:22:14.660 06:17:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:14.660 06:17:21 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.BbFbI7NPZo 00:22:14.660 06:17:21 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:14.660 06:17:21 -- host/async_init.sh@78 -- # nvmftestfini 00:22:14.660 06:17:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:14.660 06:17:21 -- nvmf/common.sh@116 -- # sync 00:22:14.660 06:17:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:14.660 06:17:21 -- nvmf/common.sh@119 -- # set +e 00:22:14.660 06:17:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:14.660 06:17:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:14.660 rmmod nvme_tcp 00:22:14.660 rmmod nvme_fabrics 00:22:14.660 rmmod nvme_keyring 00:22:14.660 06:17:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:14.660 06:17:21 -- nvmf/common.sh@123 -- # set -e 00:22:14.660 06:17:21 -- nvmf/common.sh@124 -- # return 0 00:22:14.660 06:17:21 -- nvmf/common.sh@477 -- # '[' -n 1187859 ']' 00:22:14.660 06:17:21 -- nvmf/common.sh@478 -- # killprocess 1187859 00:22:14.660 06:17:21 -- common/autotest_common.sh@926 -- # '[' -z 1187859 ']' 00:22:14.660 06:17:21 -- common/autotest_common.sh@930 -- # kill -0 1187859 00:22:14.660 06:17:21 -- common/autotest_common.sh@931 -- # uname 00:22:14.660 06:17:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:14.660 06:17:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1187859 00:22:14.660 06:17:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:14.660 06:17:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:14.660 06:17:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1187859' 00:22:14.660 killing process with pid 1187859 00:22:14.660 06:17:21 -- common/autotest_common.sh@945 -- # kill 1187859 00:22:14.660 06:17:21 -- common/autotest_common.sh@950 -- # wait 1187859 00:22:14.918 06:17:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:14.918 06:17:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:14.918 06:17:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:14.918 06:17:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:14.918 06:17:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:14.918 06:17:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.918 06:17:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:14.918 06:17:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.457 06:17:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:17.457 00:22:17.457 real 0m6.079s 00:22:17.457 user 0m2.981s 00:22:17.457 sys 0m1.753s 00:22:17.457 06:17:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:17.457 06:17:23 -- common/autotest_common.sh@10 -- # set +x 00:22:17.457 ************************************ 00:22:17.457 END TEST nvmf_async_init 00:22:17.457 
************************************ 00:22:17.458 06:17:23 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:17.458 06:17:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:17.458 06:17:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:17.458 06:17:23 -- common/autotest_common.sh@10 -- # set +x 00:22:17.458 ************************************ 00:22:17.458 START TEST dma 00:22:17.458 ************************************ 00:22:17.458 06:17:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:17.458 * Looking for test storage... 00:22:17.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:17.458 06:17:23 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:17.458 06:17:23 -- nvmf/common.sh@7 -- # uname -s 00:22:17.458 06:17:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:17.458 06:17:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:17.458 06:17:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:17.458 06:17:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:17.458 06:17:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:17.458 06:17:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:17.458 06:17:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:17.458 06:17:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:17.458 06:17:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:17.458 06:17:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:17.458 06:17:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:17.458 06:17:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:17.458 06:17:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:17.458 06:17:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:17.458 06:17:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:17.458 06:17:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:17.458 06:17:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:17.458 06:17:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:17.458 06:17:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:17.458 06:17:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.458 06:17:23 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.458 06:17:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.458 06:17:23 -- paths/export.sh@5 -- # export PATH 00:22:17.458 06:17:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.458 06:17:23 -- nvmf/common.sh@46 -- # : 0 00:22:17.458 06:17:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:17.458 06:17:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:17.458 06:17:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:17.458 06:17:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:17.458 06:17:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:17.458 06:17:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:17.458 06:17:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:17.458 06:17:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:17.458 06:17:23 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:17.458 06:17:23 -- host/dma.sh@13 -- # exit 0 00:22:17.458 00:22:17.458 real 0m0.068s 00:22:17.458 user 0m0.025s 00:22:17.458 sys 0m0.050s 00:22:17.458 06:17:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:17.458 06:17:23 -- common/autotest_common.sh@10 -- # set +x 00:22:17.458 ************************************ 00:22:17.458 END TEST dma 00:22:17.458 ************************************ 00:22:17.458 06:17:23 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:17.458 06:17:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:17.458 06:17:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:17.458 06:17:23 -- common/autotest_common.sh@10 -- # set +x 00:22:17.458 ************************************ 00:22:17.458 START TEST nvmf_identify 00:22:17.458 ************************************ 00:22:17.458 06:17:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:17.458 * Looking for 
test storage... 00:22:17.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:17.458 06:17:23 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:17.458 06:17:23 -- nvmf/common.sh@7 -- # uname -s 00:22:17.458 06:17:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:17.458 06:17:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:17.458 06:17:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:17.458 06:17:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:17.458 06:17:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:17.458 06:17:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:17.458 06:17:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:17.458 06:17:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:17.458 06:17:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:17.458 06:17:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:17.458 06:17:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:17.458 06:17:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:17.458 06:17:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:17.458 06:17:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:17.458 06:17:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:17.458 06:17:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:17.458 06:17:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:17.458 06:17:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:17.458 06:17:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:17.458 06:17:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.458 06:17:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.458 06:17:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.458 06:17:23 -- paths/export.sh@5 -- # export PATH 00:22:17.458 06:17:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.458 06:17:23 -- nvmf/common.sh@46 -- # : 0 00:22:17.458 06:17:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:17.458 06:17:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:17.458 06:17:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:17.458 06:17:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:17.458 06:17:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:17.458 06:17:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:17.458 06:17:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:17.458 06:17:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:17.458 06:17:23 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:17.458 06:17:23 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:17.458 06:17:23 -- host/identify.sh@14 -- # nvmftestinit 00:22:17.458 06:17:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:17.458 06:17:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:17.458 06:17:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:17.458 06:17:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:17.458 06:17:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:17.458 06:17:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.458 06:17:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:17.458 06:17:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.458 06:17:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:17.458 06:17:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:17.458 06:17:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:17.458 06:17:23 -- common/autotest_common.sh@10 -- # set +x 00:22:19.366 06:17:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:19.366 06:17:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:19.366 06:17:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:19.366 06:17:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:19.366 06:17:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:19.366 06:17:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:19.366 06:17:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:19.366 06:17:25 -- nvmf/common.sh@294 -- # net_devs=() 00:22:19.366 06:17:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:19.366 06:17:25 -- nvmf/common.sh@295 
-- # e810=() 00:22:19.366 06:17:25 -- nvmf/common.sh@295 -- # local -ga e810 00:22:19.366 06:17:25 -- nvmf/common.sh@296 -- # x722=() 00:22:19.366 06:17:25 -- nvmf/common.sh@296 -- # local -ga x722 00:22:19.366 06:17:25 -- nvmf/common.sh@297 -- # mlx=() 00:22:19.366 06:17:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:19.366 06:17:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:19.366 06:17:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:19.366 06:17:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:19.366 06:17:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:19.366 06:17:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:19.366 06:17:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:19.366 06:17:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:19.366 06:17:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:19.366 06:17:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:19.366 06:17:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:19.366 06:17:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:19.366 06:17:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:19.366 06:17:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:19.366 06:17:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:19.366 06:17:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:19.366 06:17:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:19.366 06:17:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:19.366 06:17:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:19.366 06:17:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:19.366 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:19.366 06:17:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:19.366 06:17:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:19.366 06:17:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.366 06:17:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.366 06:17:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:19.366 06:17:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:19.366 06:17:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:19.366 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:19.366 06:17:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:19.366 06:17:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:19.366 06:17:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:19.366 06:17:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:19.366 06:17:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:19.366 06:17:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:19.366 06:17:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:19.366 06:17:25 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:19.366 06:17:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:19.366 06:17:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.366 06:17:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:19.366 06:17:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.366 06:17:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:19.366 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:22:19.366 06:17:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.366 06:17:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:19.366 06:17:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:19.366 06:17:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:19.366 06:17:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:19.366 06:17:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:19.366 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:19.366 06:17:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:19.366 06:17:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:19.366 06:17:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:19.366 06:17:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:19.366 06:17:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:19.366 06:17:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:19.366 06:17:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:19.366 06:17:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:19.366 06:17:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:19.366 06:17:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:19.366 06:17:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:19.366 06:17:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:19.366 06:17:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:19.366 06:17:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:19.366 06:17:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:19.366 06:17:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:19.366 06:17:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:19.366 06:17:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:19.366 06:17:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:19.366 06:17:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:19.366 06:17:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:19.366 06:17:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:19.366 06:17:25 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:19.366 06:17:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:19.366 06:17:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:19.366 06:17:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:19.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:19.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:22:19.366 00:22:19.366 --- 10.0.0.2 ping statistics --- 00:22:19.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.366 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:22:19.366 06:17:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:19.366 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:19.366 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:22:19.366 00:22:19.366 --- 10.0.0.1 ping statistics --- 00:22:19.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:19.366 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:22:19.366 06:17:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:19.366 06:17:25 -- nvmf/common.sh@410 -- # return 0 00:22:19.366 06:17:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:19.366 06:17:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:19.366 06:17:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:19.366 06:17:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:19.366 06:17:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:19.366 06:17:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:19.366 06:17:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:19.366 06:17:25 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:19.366 06:17:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:19.366 06:17:25 -- common/autotest_common.sh@10 -- # set +x 00:22:19.366 06:17:25 -- host/identify.sh@19 -- # nvmfpid=1190122 00:22:19.366 06:17:25 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:19.366 06:17:25 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:19.366 06:17:25 -- host/identify.sh@23 -- # waitforlisten 1190122 00:22:19.366 06:17:25 -- common/autotest_common.sh@819 -- # '[' -z 1190122 ']' 00:22:19.366 06:17:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:19.366 06:17:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:19.366 06:17:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:19.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:19.366 06:17:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:19.366 06:17:25 -- common/autotest_common.sh@10 -- # set +x 00:22:19.366 [2024-07-13 06:17:25.644285] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:19.366 [2024-07-13 06:17:25.644361] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:19.366 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.366 [2024-07-13 06:17:25.709776] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:19.367 [2024-07-13 06:17:25.826093] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:19.367 [2024-07-13 06:17:25.826246] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:19.367 [2024-07-13 06:17:25.826265] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:19.367 [2024-07-13 06:17:25.826279] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
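The two successful pings above are the tail end of the network plumbing that nvmf/common.sh performs for a phy (NET_TYPE=phy, e810) run: one port of the NIC pair is moved into a private network namespace to play the target, while the other stays in the root namespace as the initiator. A minimal sketch of that wiring, using the interface names and addresses from the trace, is included here for reference; it is illustrative only, not the helper itself.

  # Sketch of the namespace wiring performed above (names/addresses taken from the trace).
  NS=cvl_0_0_ns_spdk                                    # target-side namespace
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                       # move the target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1                # target -> initiator

Because the target application is launched with ip netns exec cvl_0_0_ns_spdk (as in the nvmf_tgt command above), everything it listens on at 10.0.0.2:4420 is reached from the root namespace over the physical e810 link rather than loopback.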
00:22:19.367 [2024-07-13 06:17:25.826362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:19.367 [2024-07-13 06:17:25.826417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:19.367 [2024-07-13 06:17:25.826535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:19.367 [2024-07-13 06:17:25.826538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.304 06:17:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:20.304 06:17:26 -- common/autotest_common.sh@852 -- # return 0 00:22:20.304 06:17:26 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:20.304 06:17:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:20.304 06:17:26 -- common/autotest_common.sh@10 -- # set +x 00:22:20.304 [2024-07-13 06:17:26.576266] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:20.304 06:17:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:20.304 06:17:26 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:20.304 06:17:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:20.304 06:17:26 -- common/autotest_common.sh@10 -- # set +x 00:22:20.304 06:17:26 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:20.304 06:17:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:20.304 06:17:26 -- common/autotest_common.sh@10 -- # set +x 00:22:20.304 Malloc0 00:22:20.304 06:17:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:20.304 06:17:26 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:20.304 06:17:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:20.304 06:17:26 -- common/autotest_common.sh@10 -- # set +x 00:22:20.304 06:17:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:20.304 06:17:26 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:20.304 06:17:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:20.304 06:17:26 -- common/autotest_common.sh@10 -- # set +x 00:22:20.304 06:17:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:20.304 06:17:26 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:20.304 06:17:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:20.304 06:17:26 -- common/autotest_common.sh@10 -- # set +x 00:22:20.304 [2024-07-13 06:17:26.647335] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:20.304 06:17:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:20.304 06:17:26 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:20.304 06:17:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:20.304 06:17:26 -- common/autotest_common.sh@10 -- # set +x 00:22:20.304 06:17:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:20.304 06:17:26 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:20.304 06:17:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:20.304 06:17:26 -- common/autotest_common.sh@10 -- # set +x 00:22:20.304 [2024-07-13 06:17:26.663114] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:22:20.304 [ 
00:22:20.304 { 00:22:20.304 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:20.304 "subtype": "Discovery", 00:22:20.304 "listen_addresses": [ 00:22:20.304 { 00:22:20.304 "transport": "TCP", 00:22:20.304 "trtype": "TCP", 00:22:20.304 "adrfam": "IPv4", 00:22:20.304 "traddr": "10.0.0.2", 00:22:20.304 "trsvcid": "4420" 00:22:20.304 } 00:22:20.304 ], 00:22:20.304 "allow_any_host": true, 00:22:20.304 "hosts": [] 00:22:20.304 }, 00:22:20.304 { 00:22:20.304 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:20.304 "subtype": "NVMe", 00:22:20.304 "listen_addresses": [ 00:22:20.304 { 00:22:20.304 "transport": "TCP", 00:22:20.304 "trtype": "TCP", 00:22:20.304 "adrfam": "IPv4", 00:22:20.304 "traddr": "10.0.0.2", 00:22:20.304 "trsvcid": "4420" 00:22:20.304 } 00:22:20.304 ], 00:22:20.304 "allow_any_host": true, 00:22:20.304 "hosts": [], 00:22:20.304 "serial_number": "SPDK00000000000001", 00:22:20.304 "model_number": "SPDK bdev Controller", 00:22:20.304 "max_namespaces": 32, 00:22:20.304 "min_cntlid": 1, 00:22:20.304 "max_cntlid": 65519, 00:22:20.304 "namespaces": [ 00:22:20.304 { 00:22:20.304 "nsid": 1, 00:22:20.304 "bdev_name": "Malloc0", 00:22:20.304 "name": "Malloc0", 00:22:20.304 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:20.304 "eui64": "ABCDEF0123456789", 00:22:20.304 "uuid": "0c3880c2-d863-411b-a471-fe540f087868" 00:22:20.304 } 00:22:20.304 ] 00:22:20.304 } 00:22:20.304 ] 00:22:20.304 06:17:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:20.304 06:17:26 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:20.304 [2024-07-13 06:17:26.684410] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
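At this point host/identify.sh launches the spdk_nvme_identify example against the discovery subsystem with -L all (its startup banner appears just above), which is why the remainder of the trace is dominated by per-state DEBUG lines: connect adminq, read VS, read CAP, check EN, enable the controller by writing CC.EN = 1, wait for CSTS.RDY = 1, identify controller, configure AER, then set the keep-alive timeout. Rerunning it by hand would look roughly like the sketch below; the first command is taken from the trace, while the second (pointed at the data subsystem created above) is an assumed variant added purely for illustration.

  IDENTIFY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify

  # Verbose identify of the discovery subsystem, as in the trace (-L all enables the debug
  # log flags that produce the output below)
  $IDENTIFY -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all

  # Assumed variant for illustration: identify the NVMe subsystem exported above
  $IDENTIFY -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'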
00:22:20.304 [2024-07-13 06:17:26.684447] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1190282 ] 00:22:20.304 EAL: No free 2048 kB hugepages reported on node 1 00:22:20.304 [2024-07-13 06:17:26.716141] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:20.304 [2024-07-13 06:17:26.716219] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:20.304 [2024-07-13 06:17:26.716230] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:20.304 [2024-07-13 06:17:26.716244] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:20.304 [2024-07-13 06:17:26.716256] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:20.304 [2024-07-13 06:17:26.719922] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:20.304 [2024-07-13 06:17:26.719991] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x518e10 0 00:22:20.304 [2024-07-13 06:17:26.727881] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:20.304 [2024-07-13 06:17:26.727900] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:20.304 [2024-07-13 06:17:26.727908] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:20.304 [2024-07-13 06:17:26.727914] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:20.304 [2024-07-13 06:17:26.727963] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.304 [2024-07-13 06:17:26.727975] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.304 [2024-07-13 06:17:26.727982] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x518e10) 00:22:20.304 [2024-07-13 06:17:26.727999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:20.304 [2024-07-13 06:17:26.728025] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x598bf0, cid 0, qid 0 00:22:20.304 [2024-07-13 06:17:26.735878] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.304 [2024-07-13 06:17:26.735896] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.304 [2024-07-13 06:17:26.735903] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.304 [2024-07-13 06:17:26.735910] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x598bf0) on tqpair=0x518e10 00:22:20.304 [2024-07-13 06:17:26.735929] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:20.304 [2024-07-13 06:17:26.735940] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:20.304 [2024-07-13 06:17:26.735949] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:20.304 [2024-07-13 06:17:26.735967] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.304 [2024-07-13 06:17:26.735976] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:22:20.304 [2024-07-13 06:17:26.735982] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x518e10) 00:22:20.304 [2024-07-13 06:17:26.735993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.304 [2024-07-13 06:17:26.736015] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x598bf0, cid 0, qid 0 00:22:20.304 [2024-07-13 06:17:26.736172] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.304 [2024-07-13 06:17:26.736185] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.304 [2024-07-13 06:17:26.736191] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.304 [2024-07-13 06:17:26.736198] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x598bf0) on tqpair=0x518e10 00:22:20.304 [2024-07-13 06:17:26.736207] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:20.304 [2024-07-13 06:17:26.736219] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:20.304 [2024-07-13 06:17:26.736231] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.304 [2024-07-13 06:17:26.736238] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.304 [2024-07-13 06:17:26.736249] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x518e10) 00:22:20.304 [2024-07-13 06:17:26.736260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.305 [2024-07-13 06:17:26.736280] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x598bf0, cid 0, qid 0 00:22:20.305 [2024-07-13 06:17:26.736399] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.305 [2024-07-13 06:17:26.736413] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.305 [2024-07-13 06:17:26.736420] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.305 [2024-07-13 06:17:26.736426] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x598bf0) on tqpair=0x518e10 00:22:20.305 [2024-07-13 06:17:26.736434] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:20.305 [2024-07-13 06:17:26.736448] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:20.305 [2024-07-13 06:17:26.736460] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.305 [2024-07-13 06:17:26.736468] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.305 [2024-07-13 06:17:26.736474] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x518e10) 00:22:20.305 [2024-07-13 06:17:26.736484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.305 [2024-07-13 06:17:26.736504] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x598bf0, cid 0, qid 0 00:22:20.305 [2024-07-13 06:17:26.736611] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.305 [2024-07-13 06:17:26.736626] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.305 [2024-07-13 06:17:26.736632] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.305 [2024-07-13 06:17:26.736639] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x598bf0) on tqpair=0x518e10 00:22:20.305 [2024-07-13 06:17:26.736647] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:20.305 [2024-07-13 06:17:26.736664] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.305 [2024-07-13 06:17:26.736673] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.305 [2024-07-13 06:17:26.736679] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x518e10) 00:22:20.305 [2024-07-13 06:17:26.736689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.305 [2024-07-13 06:17:26.736709] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x598bf0, cid 0, qid 0 00:22:20.305 [2024-07-13 06:17:26.736826] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.305 [2024-07-13 06:17:26.736838] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.305 [2024-07-13 06:17:26.736859] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.305 [2024-07-13 06:17:26.736875] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x598bf0) on tqpair=0x518e10 00:22:20.305 [2024-07-13 06:17:26.736884] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:20.305 [2024-07-13 06:17:26.736893] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:20.305 [2024-07-13 06:17:26.736907] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:20.305 [2024-07-13 06:17:26.737017] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:20.305 [2024-07-13 06:17:26.737025] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:20.305 [2024-07-13 06:17:26.737044] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.305 [2024-07-13 06:17:26.737052] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.305 [2024-07-13 06:17:26.737059] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x518e10) 00:22:20.305 [2024-07-13 06:17:26.737069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.305 [2024-07-13 06:17:26.737091] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x598bf0, cid 0, qid 0 00:22:20.305 [2024-07-13 06:17:26.737231] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.305 [2024-07-13 06:17:26.737244] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.305 [2024-07-13 06:17:26.737251] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.305 
[2024-07-13 06:17:26.737257] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x598bf0) on tqpair=0x518e10 00:22:20.305 [2024-07-13 06:17:26.737265] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:20.305 [2024-07-13 06:17:26.737281] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.305 [2024-07-13 06:17:26.737290] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.305 [2024-07-13 06:17:26.737296] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x518e10) 00:22:20.305 [2024-07-13 06:17:26.737306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.305 [2024-07-13 06:17:26.737325] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x598bf0, cid 0, qid 0 00:22:20.305 [2024-07-13 06:17:26.737443] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.305 [2024-07-13 06:17:26.737455] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.305 [2024-07-13 06:17:26.737461] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.305 [2024-07-13 06:17:26.737468] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x598bf0) on tqpair=0x518e10 00:22:20.305 [2024-07-13 06:17:26.737476] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:20.305 [2024-07-13 06:17:26.737484] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:20.305 [2024-07-13 06:17:26.737497] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:20.305 [2024-07-13 06:17:26.737515] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:20.305 [2024-07-13 06:17:26.737530] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.305 [2024-07-13 06:17:26.737538] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.305 [2024-07-13 06:17:26.737544] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x518e10) 00:22:20.305 [2024-07-13 06:17:26.737554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.305 [2024-07-13 06:17:26.737574] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x598bf0, cid 0, qid 0 00:22:20.305 [2024-07-13 06:17:26.737730] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:20.305 [2024-07-13 06:17:26.737758] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:20.305 [2024-07-13 06:17:26.737765] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:20.305 [2024-07-13 06:17:26.737772] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x518e10): datao=0, datal=4096, cccid=0 00:22:20.305 [2024-07-13 06:17:26.737784] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x598bf0) on tqpair(0x518e10): expected_datao=0, payload_size=4096 00:22:20.305 
[2024-07-13 06:17:26.737804] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:20.305 [2024-07-13 06:17:26.737813] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:20.305 [2024-07-13 06:17:26.781877] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.305 [2024-07-13 06:17:26.781895] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.305 [2024-07-13 06:17:26.781902] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.305 [2024-07-13 06:17:26.781909] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x598bf0) on tqpair=0x518e10 00:22:20.305 [2024-07-13 06:17:26.781921] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:20.305 [2024-07-13 06:17:26.781930] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:20.305 [2024-07-13 06:17:26.781938] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:20.305 [2024-07-13 06:17:26.781946] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:20.305 [2024-07-13 06:17:26.781953] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:20.305 [2024-07-13 06:17:26.781961] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:20.305 [2024-07-13 06:17:26.781981] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:20.305 [2024-07-13 06:17:26.781995] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.305 [2024-07-13 06:17:26.782002] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.305 [2024-07-13 06:17:26.782008] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x518e10) 00:22:20.305 [2024-07-13 06:17:26.782020] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:20.305 [2024-07-13 06:17:26.782043] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x598bf0, cid 0, qid 0 00:22:20.305 [2024-07-13 06:17:26.782193] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.305 [2024-07-13 06:17:26.782206] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.305 [2024-07-13 06:17:26.782212] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.305 [2024-07-13 06:17:26.782219] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x598bf0) on tqpair=0x518e10 00:22:20.305 [2024-07-13 06:17:26.782231] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.305 [2024-07-13 06:17:26.782238] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.305 [2024-07-13 06:17:26.782244] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x518e10) 00:22:20.305 [2024-07-13 06:17:26.782254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:20.306 [2024-07-13 06:17:26.782264] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.306 [2024-07-13 06:17:26.782271] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.306 [2024-07-13 06:17:26.782277] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x518e10) 00:22:20.306 [2024-07-13 06:17:26.782285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:20.306 [2024-07-13 06:17:26.782295] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.306 [2024-07-13 06:17:26.782301] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.306 [2024-07-13 06:17:26.782307] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x518e10) 00:22:20.306 [2024-07-13 06:17:26.782320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:20.306 [2024-07-13 06:17:26.782330] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.306 [2024-07-13 06:17:26.782337] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.306 [2024-07-13 06:17:26.782343] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x518e10) 00:22:20.306 [2024-07-13 06:17:26.782366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:20.306 [2024-07-13 06:17:26.782374] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:20.306 [2024-07-13 06:17:26.782392] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:20.306 [2024-07-13 06:17:26.782404] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.306 [2024-07-13 06:17:26.782411] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.306 [2024-07-13 06:17:26.782417] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x518e10) 00:22:20.306 [2024-07-13 06:17:26.782427] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.306 [2024-07-13 06:17:26.782448] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x598bf0, cid 0, qid 0 00:22:20.306 [2024-07-13 06:17:26.782473] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x598d50, cid 1, qid 0 00:22:20.306 [2024-07-13 06:17:26.782481] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x598eb0, cid 2, qid 0 00:22:20.306 [2024-07-13 06:17:26.782488] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x599010, cid 3, qid 0 00:22:20.306 [2024-07-13 06:17:26.782496] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x599170, cid 4, qid 0 00:22:20.306 [2024-07-13 06:17:26.782643] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.306 [2024-07-13 06:17:26.782658] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.306 [2024-07-13 06:17:26.782664] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.306 [2024-07-13 06:17:26.782671] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x599170) on 
tqpair=0x518e10 00:22:20.306 [2024-07-13 06:17:26.782680] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:20.306 [2024-07-13 06:17:26.782688] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:20.306 [2024-07-13 06:17:26.782706] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.306 [2024-07-13 06:17:26.782715] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.306 [2024-07-13 06:17:26.782721] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x518e10) 00:22:20.306 [2024-07-13 06:17:26.782732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.306 [2024-07-13 06:17:26.782752] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x599170, cid 4, qid 0 00:22:20.306 [2024-07-13 06:17:26.782933] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:20.306 [2024-07-13 06:17:26.782948] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:20.306 [2024-07-13 06:17:26.782954] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:20.306 [2024-07-13 06:17:26.782961] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x518e10): datao=0, datal=4096, cccid=4 00:22:20.306 [2024-07-13 06:17:26.782969] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x599170) on tqpair(0x518e10): expected_datao=0, payload_size=4096 00:22:20.306 [2024-07-13 06:17:26.782980] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:20.306 [2024-07-13 06:17:26.782991] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:20.306 [2024-07-13 06:17:26.783020] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.306 [2024-07-13 06:17:26.783031] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.306 [2024-07-13 06:17:26.783037] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.306 [2024-07-13 06:17:26.783044] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x599170) on tqpair=0x518e10 00:22:20.306 [2024-07-13 06:17:26.783062] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:20.306 [2024-07-13 06:17:26.783097] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.306 [2024-07-13 06:17:26.783108] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.306 [2024-07-13 06:17:26.783114] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x518e10) 00:22:20.306 [2024-07-13 06:17:26.783125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.306 [2024-07-13 06:17:26.783136] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.306 [2024-07-13 06:17:26.783143] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.306 [2024-07-13 06:17:26.783150] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x518e10) 00:22:20.306 [2024-07-13 06:17:26.783159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE 
(18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:20.306 [2024-07-13 06:17:26.783200] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x599170, cid 4, qid 0 00:22:20.306 [2024-07-13 06:17:26.783212] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5992d0, cid 5, qid 0 00:22:20.306 [2024-07-13 06:17:26.783388] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:20.306 [2024-07-13 06:17:26.783400] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:20.306 [2024-07-13 06:17:26.783407] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:20.306 [2024-07-13 06:17:26.783413] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x518e10): datao=0, datal=1024, cccid=4 00:22:20.306 [2024-07-13 06:17:26.783420] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x599170) on tqpair(0x518e10): expected_datao=0, payload_size=1024 00:22:20.306 [2024-07-13 06:17:26.783430] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:20.306 [2024-07-13 06:17:26.783437] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:20.306 [2024-07-13 06:17:26.783446] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.306 [2024-07-13 06:17:26.783454] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.306 [2024-07-13 06:17:26.783461] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.306 [2024-07-13 06:17:26.783467] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5992d0) on tqpair=0x518e10 00:22:20.571 [2024-07-13 06:17:26.828876] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.571 [2024-07-13 06:17:26.828896] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.571 [2024-07-13 06:17:26.828904] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.571 [2024-07-13 06:17:26.828910] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x599170) on tqpair=0x518e10 00:22:20.571 [2024-07-13 06:17:26.828929] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.571 [2024-07-13 06:17:26.828938] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.571 [2024-07-13 06:17:26.828945] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x518e10) 00:22:20.571 [2024-07-13 06:17:26.828956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.571 [2024-07-13 06:17:26.828986] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x599170, cid 4, qid 0 00:22:20.571 [2024-07-13 06:17:26.829167] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:20.571 [2024-07-13 06:17:26.829183] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:20.571 [2024-07-13 06:17:26.829190] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:20.571 [2024-07-13 06:17:26.829196] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x518e10): datao=0, datal=3072, cccid=4 00:22:20.571 [2024-07-13 06:17:26.829204] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x599170) on tqpair(0x518e10): expected_datao=0, payload_size=3072 00:22:20.571 [2024-07-13 06:17:26.829214] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
00:22:20.571 [2024-07-13 06:17:26.829222] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:20.571 [2024-07-13 06:17:26.829240] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.571 [2024-07-13 06:17:26.829251] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.571 [2024-07-13 06:17:26.829257] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.571 [2024-07-13 06:17:26.829264] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x599170) on tqpair=0x518e10 00:22:20.571 [2024-07-13 06:17:26.829279] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.571 [2024-07-13 06:17:26.829287] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.571 [2024-07-13 06:17:26.829293] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x518e10) 00:22:20.571 [2024-07-13 06:17:26.829304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.571 [2024-07-13 06:17:26.829330] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x599170, cid 4, qid 0 00:22:20.571 [2024-07-13 06:17:26.829459] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:20.571 [2024-07-13 06:17:26.829474] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:20.571 [2024-07-13 06:17:26.829481] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:20.571 [2024-07-13 06:17:26.829487] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x518e10): datao=0, datal=8, cccid=4 00:22:20.571 [2024-07-13 06:17:26.829494] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x599170) on tqpair(0x518e10): expected_datao=0, payload_size=8 00:22:20.571 [2024-07-13 06:17:26.829504] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:20.571 [2024-07-13 06:17:26.829511] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:20.571 [2024-07-13 06:17:26.870991] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.571 [2024-07-13 06:17:26.871010] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.571 [2024-07-13 06:17:26.871017] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.571 [2024-07-13 06:17:26.871024] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x599170) on tqpair=0x518e10 00:22:20.571 ===================================================== 00:22:20.571 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:20.571 ===================================================== 00:22:20.571 Controller Capabilities/Features 00:22:20.571 ================================ 00:22:20.571 Vendor ID: 0000 00:22:20.571 Subsystem Vendor ID: 0000 00:22:20.571 Serial Number: .................... 00:22:20.571 Model Number: ........................................ 
00:22:20.571 Firmware Version: 24.01.1 00:22:20.571 Recommended Arb Burst: 0 00:22:20.571 IEEE OUI Identifier: 00 00 00 00:22:20.571 Multi-path I/O 00:22:20.571 May have multiple subsystem ports: No 00:22:20.571 May have multiple controllers: No 00:22:20.571 Associated with SR-IOV VF: No 00:22:20.571 Max Data Transfer Size: 131072 00:22:20.571 Max Number of Namespaces: 0 00:22:20.571 Max Number of I/O Queues: 1024 00:22:20.571 NVMe Specification Version (VS): 1.3 00:22:20.571 NVMe Specification Version (Identify): 1.3 00:22:20.571 Maximum Queue Entries: 128 00:22:20.571 Contiguous Queues Required: Yes 00:22:20.571 Arbitration Mechanisms Supported 00:22:20.571 Weighted Round Robin: Not Supported 00:22:20.571 Vendor Specific: Not Supported 00:22:20.571 Reset Timeout: 15000 ms 00:22:20.571 Doorbell Stride: 4 bytes 00:22:20.571 NVM Subsystem Reset: Not Supported 00:22:20.571 Command Sets Supported 00:22:20.571 NVM Command Set: Supported 00:22:20.571 Boot Partition: Not Supported 00:22:20.571 Memory Page Size Minimum: 4096 bytes 00:22:20.571 Memory Page Size Maximum: 4096 bytes 00:22:20.571 Persistent Memory Region: Not Supported 00:22:20.571 Optional Asynchronous Events Supported 00:22:20.571 Namespace Attribute Notices: Not Supported 00:22:20.571 Firmware Activation Notices: Not Supported 00:22:20.571 ANA Change Notices: Not Supported 00:22:20.571 PLE Aggregate Log Change Notices: Not Supported 00:22:20.571 LBA Status Info Alert Notices: Not Supported 00:22:20.571 EGE Aggregate Log Change Notices: Not Supported 00:22:20.571 Normal NVM Subsystem Shutdown event: Not Supported 00:22:20.571 Zone Descriptor Change Notices: Not Supported 00:22:20.571 Discovery Log Change Notices: Supported 00:22:20.571 Controller Attributes 00:22:20.571 128-bit Host Identifier: Not Supported 00:22:20.571 Non-Operational Permissive Mode: Not Supported 00:22:20.571 NVM Sets: Not Supported 00:22:20.571 Read Recovery Levels: Not Supported 00:22:20.571 Endurance Groups: Not Supported 00:22:20.571 Predictable Latency Mode: Not Supported 00:22:20.571 Traffic Based Keep ALive: Not Supported 00:22:20.571 Namespace Granularity: Not Supported 00:22:20.571 SQ Associations: Not Supported 00:22:20.571 UUID List: Not Supported 00:22:20.571 Multi-Domain Subsystem: Not Supported 00:22:20.571 Fixed Capacity Management: Not Supported 00:22:20.571 Variable Capacity Management: Not Supported 00:22:20.571 Delete Endurance Group: Not Supported 00:22:20.571 Delete NVM Set: Not Supported 00:22:20.571 Extended LBA Formats Supported: Not Supported 00:22:20.571 Flexible Data Placement Supported: Not Supported 00:22:20.571 00:22:20.571 Controller Memory Buffer Support 00:22:20.571 ================================ 00:22:20.571 Supported: No 00:22:20.571 00:22:20.571 Persistent Memory Region Support 00:22:20.571 ================================ 00:22:20.571 Supported: No 00:22:20.571 00:22:20.571 Admin Command Set Attributes 00:22:20.571 ============================ 00:22:20.571 Security Send/Receive: Not Supported 00:22:20.571 Format NVM: Not Supported 00:22:20.571 Firmware Activate/Download: Not Supported 00:22:20.571 Namespace Management: Not Supported 00:22:20.571 Device Self-Test: Not Supported 00:22:20.571 Directives: Not Supported 00:22:20.571 NVMe-MI: Not Supported 00:22:20.571 Virtualization Management: Not Supported 00:22:20.571 Doorbell Buffer Config: Not Supported 00:22:20.571 Get LBA Status Capability: Not Supported 00:22:20.571 Command & Feature Lockdown Capability: Not Supported 00:22:20.571 Abort Command Limit: 1 00:22:20.571 
Async Event Request Limit: 4 00:22:20.571 Number of Firmware Slots: N/A 00:22:20.571 Firmware Slot 1 Read-Only: N/A 00:22:20.571 Firmware Activation Without Reset: N/A 00:22:20.571 Multiple Update Detection Support: N/A 00:22:20.571 Firmware Update Granularity: No Information Provided 00:22:20.571 Per-Namespace SMART Log: No 00:22:20.571 Asymmetric Namespace Access Log Page: Not Supported 00:22:20.571 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:20.571 Command Effects Log Page: Not Supported 00:22:20.571 Get Log Page Extended Data: Supported 00:22:20.571 Telemetry Log Pages: Not Supported 00:22:20.571 Persistent Event Log Pages: Not Supported 00:22:20.571 Supported Log Pages Log Page: May Support 00:22:20.571 Commands Supported & Effects Log Page: Not Supported 00:22:20.571 Feature Identifiers & Effects Log Page:May Support 00:22:20.571 NVMe-MI Commands & Effects Log Page: May Support 00:22:20.571 Data Area 4 for Telemetry Log: Not Supported 00:22:20.571 Error Log Page Entries Supported: 128 00:22:20.571 Keep Alive: Not Supported 00:22:20.571 00:22:20.571 NVM Command Set Attributes 00:22:20.571 ========================== 00:22:20.571 Submission Queue Entry Size 00:22:20.571 Max: 1 00:22:20.571 Min: 1 00:22:20.571 Completion Queue Entry Size 00:22:20.571 Max: 1 00:22:20.572 Min: 1 00:22:20.572 Number of Namespaces: 0 00:22:20.572 Compare Command: Not Supported 00:22:20.572 Write Uncorrectable Command: Not Supported 00:22:20.572 Dataset Management Command: Not Supported 00:22:20.572 Write Zeroes Command: Not Supported 00:22:20.572 Set Features Save Field: Not Supported 00:22:20.572 Reservations: Not Supported 00:22:20.572 Timestamp: Not Supported 00:22:20.572 Copy: Not Supported 00:22:20.572 Volatile Write Cache: Not Present 00:22:20.572 Atomic Write Unit (Normal): 1 00:22:20.572 Atomic Write Unit (PFail): 1 00:22:20.572 Atomic Compare & Write Unit: 1 00:22:20.572 Fused Compare & Write: Supported 00:22:20.572 Scatter-Gather List 00:22:20.572 SGL Command Set: Supported 00:22:20.572 SGL Keyed: Supported 00:22:20.572 SGL Bit Bucket Descriptor: Not Supported 00:22:20.572 SGL Metadata Pointer: Not Supported 00:22:20.572 Oversized SGL: Not Supported 00:22:20.572 SGL Metadata Address: Not Supported 00:22:20.572 SGL Offset: Supported 00:22:20.572 Transport SGL Data Block: Not Supported 00:22:20.572 Replay Protected Memory Block: Not Supported 00:22:20.572 00:22:20.572 Firmware Slot Information 00:22:20.572 ========================= 00:22:20.572 Active slot: 0 00:22:20.572 00:22:20.572 00:22:20.572 Error Log 00:22:20.572 ========= 00:22:20.572 00:22:20.572 Active Namespaces 00:22:20.572 ================= 00:22:20.572 Discovery Log Page 00:22:20.572 ================== 00:22:20.572 Generation Counter: 2 00:22:20.572 Number of Records: 2 00:22:20.572 Record Format: 0 00:22:20.572 00:22:20.572 Discovery Log Entry 0 00:22:20.572 ---------------------- 00:22:20.572 Transport Type: 3 (TCP) 00:22:20.572 Address Family: 1 (IPv4) 00:22:20.572 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:20.572 Entry Flags: 00:22:20.572 Duplicate Returned Information: 1 00:22:20.572 Explicit Persistent Connection Support for Discovery: 1 00:22:20.572 Transport Requirements: 00:22:20.572 Secure Channel: Not Required 00:22:20.572 Port ID: 0 (0x0000) 00:22:20.572 Controller ID: 65535 (0xffff) 00:22:20.572 Admin Max SQ Size: 128 00:22:20.572 Transport Service Identifier: 4420 00:22:20.572 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:20.572 Transport Address: 10.0.0.2 00:22:20.572 
Discovery Log Entry 1 00:22:20.572 ---------------------- 00:22:20.572 Transport Type: 3 (TCP) 00:22:20.572 Address Family: 1 (IPv4) 00:22:20.572 Subsystem Type: 2 (NVM Subsystem) 00:22:20.572 Entry Flags: 00:22:20.572 Duplicate Returned Information: 0 00:22:20.572 Explicit Persistent Connection Support for Discovery: 0 00:22:20.572 Transport Requirements: 00:22:20.572 Secure Channel: Not Required 00:22:20.572 Port ID: 0 (0x0000) 00:22:20.572 Controller ID: 65535 (0xffff) 00:22:20.572 Admin Max SQ Size: 128 00:22:20.572 Transport Service Identifier: 4420 00:22:20.572 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:20.572 Transport Address: 10.0.0.2 [2024-07-13 06:17:26.871138] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:20.572 [2024-07-13 06:17:26.871162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.572 [2024-07-13 06:17:26.871175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.572 [2024-07-13 06:17:26.871185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.572 [2024-07-13 06:17:26.871194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.572 [2024-07-13 06:17:26.871208] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.572 [2024-07-13 06:17:26.871230] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.572 [2024-07-13 06:17:26.871237] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x518e10) 00:22:20.572 [2024-07-13 06:17:26.871248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.572 [2024-07-13 06:17:26.871276] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x599010, cid 3, qid 0 00:22:20.572 [2024-07-13 06:17:26.871399] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.572 [2024-07-13 06:17:26.871414] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.572 [2024-07-13 06:17:26.871421] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.572 [2024-07-13 06:17:26.871427] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x599010) on tqpair=0x518e10 00:22:20.572 [2024-07-13 06:17:26.871438] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.572 [2024-07-13 06:17:26.871446] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.572 [2024-07-13 06:17:26.871452] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x518e10) 00:22:20.572 [2024-07-13 06:17:26.871463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.572 [2024-07-13 06:17:26.871488] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x599010, cid 3, qid 0 00:22:20.572 [2024-07-13 06:17:26.871630] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.572 [2024-07-13 06:17:26.871642] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.572 [2024-07-13 06:17:26.871648] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.572 [2024-07-13 06:17:26.871655] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x599010) on tqpair=0x518e10 00:22:20.572 [2024-07-13 06:17:26.871663] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:20.572 [2024-07-13 06:17:26.871671] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:20.572 [2024-07-13 06:17:26.871686] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.572 [2024-07-13 06:17:26.871695] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.572 [2024-07-13 06:17:26.871702] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x518e10) 00:22:20.572 [2024-07-13 06:17:26.871712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.572 [2024-07-13 06:17:26.871731] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x599010, cid 3, qid 0 00:22:20.572 [2024-07-13 06:17:26.871860] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.572 [2024-07-13 06:17:26.871898] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.572 [2024-07-13 06:17:26.871905] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.572 [2024-07-13 06:17:26.871912] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x599010) on tqpair=0x518e10 00:22:20.572 [2024-07-13 06:17:26.871930] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.572 [2024-07-13 06:17:26.871940] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.572 [2024-07-13 06:17:26.871947] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x518e10) 00:22:20.572 [2024-07-13 06:17:26.871957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.572 [2024-07-13 06:17:26.871979] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x599010, cid 3, qid 0 00:22:20.572 [2024-07-13 06:17:26.872101] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.572 [2024-07-13 06:17:26.872116] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.572 [2024-07-13 06:17:26.872123] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.572 [2024-07-13 06:17:26.872130] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x599010) on tqpair=0x518e10 00:22:20.572 [2024-07-13 06:17:26.872146] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.572 [2024-07-13 06:17:26.872156] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.572 [2024-07-13 06:17:26.872178] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x518e10) 00:22:20.572 [2024-07-13 06:17:26.872205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.572 [2024-07-13 06:17:26.872226] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x599010, cid 3, qid 0 00:22:20.572 [2024-07-13 06:17:26.872335] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.572 [2024-07-13 
06:17:26.872347] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.572 [2024-07-13 06:17:26.872353] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.572 [2024-07-13 06:17:26.872360] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x599010) on tqpair=0x518e10 00:22:20.572 [2024-07-13 06:17:26.872375] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.572 [2024-07-13 06:17:26.872384] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.572 [2024-07-13 06:17:26.872390] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x518e10) 00:22:20.572 [2024-07-13 06:17:26.872400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.572 [2024-07-13 06:17:26.872420] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x599010, cid 3, qid 0 00:22:20.572 [2024-07-13 06:17:26.872541] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.572 [2024-07-13 06:17:26.872556] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.572 [2024-07-13 06:17:26.872563] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.572 [2024-07-13 06:17:26.872569] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x599010) on tqpair=0x518e10 00:22:20.572 [2024-07-13 06:17:26.872585] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.572 [2024-07-13 06:17:26.872594] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.572 [2024-07-13 06:17:26.872601] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x518e10) 00:22:20.572 [2024-07-13 06:17:26.872611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.572 [2024-07-13 06:17:26.872631] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x599010, cid 3, qid 0 00:22:20.572 [2024-07-13 06:17:26.872739] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.572 [2024-07-13 06:17:26.872751] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.572 [2024-07-13 06:17:26.872758] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.572 [2024-07-13 06:17:26.872764] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x599010) on tqpair=0x518e10 00:22:20.573 [2024-07-13 06:17:26.872779] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.573 [2024-07-13 06:17:26.872788] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.573 [2024-07-13 06:17:26.872795] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x518e10) 00:22:20.573 [2024-07-13 06:17:26.872805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.573 [2024-07-13 06:17:26.872824] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x599010, cid 3, qid 0 00:22:20.573 [2024-07-13 06:17:26.876885] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.573 [2024-07-13 06:17:26.876901] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.573 [2024-07-13 06:17:26.876908] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.573 
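The discovery log page dumped above (Generation Counter 2, two records: the discovery subsystem itself and nqn.2016-06.io.spdk:cnode1, both TCP/IPv4 on port 4420 at 10.0.0.2) is read with GET LOG PAGE commands for log identifier 0x70, visible in the cdw10 values 00ff0070/02ff0070/00010070 earlier in the trace. A hedged sketch of fetching and decoding it through SPDK's public API, assuming the spdk_nvme_ctrlr_cmd_get_log_page() signature and the spdk/nvmf_spec.h structures as recalled here:

#include "spdk/stdinc.h"
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

/* Completion callback for the discovery GET LOG PAGE; cb_arg is the buffer. */
static void
discovery_log_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	const struct spdk_nvmf_discovery_log_page *log = cb_arg;

	if (spdk_nvme_cpl_is_error(cpl)) {
		return;
	}
	for (uint64_t i = 0; i < log->numrec; i++) {
		/* Entries follow the 1024-byte page header; each entry is 1024 bytes.
		 * trtype 3 = TCP, adrfam 1 = IPv4, subtype 2 = NVM subsystem,
		 * subtype 3 = current discovery subsystem (cf. the dump above). */
		const struct spdk_nvmf_discovery_log_page_entry *e = &log->entries[i];

		printf("trtype=%u adrfam=%u subtype=%u trsvcid=%.32s subnqn=%.256s\n",
		       e->trtype, e->adrfam, e->subtype, e->trsvcid, e->subnqn);
	}
}

/* Issue GET LOG PAGE for the discovery log (LID 0x70). 'buf' must hold the
 * page header plus the expected entries; nsid 0 matches the trace above. */
static int
fetch_discovery_log(struct spdk_nvme_ctrlr *ctrlr, void *buf, uint32_t len)
{
	return spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
						0, buf, len, 0,
						discovery_log_done, buf);
}

The call completes asynchronously; the caller keeps polling spdk_nvme_ctrlr_process_admin_completions() until the callback runs, which is the poll loop that produces the interleaved C2H data and response-capsule PDUs in the trace.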
[2024-07-13 06:17:26.876915] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x599010) on tqpair=0x518e10 00:22:20.573 [2024-07-13 06:17:26.876931] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.573 [2024-07-13 06:17:26.876940] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.573 [2024-07-13 06:17:26.876946] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x518e10) 00:22:20.573 [2024-07-13 06:17:26.876961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.573 [2024-07-13 06:17:26.876983] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x599010, cid 3, qid 0 00:22:20.573 [2024-07-13 06:17:26.877122] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.573 [2024-07-13 06:17:26.877134] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.573 [2024-07-13 06:17:26.877156] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.573 [2024-07-13 06:17:26.877163] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x599010) on tqpair=0x518e10 00:22:20.573 [2024-07-13 06:17:26.877176] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:22:20.573 00:22:20.573 06:17:26 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:20.573 [2024-07-13 06:17:26.908714] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
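The spdk_nvme_identify run started above takes its target as a transport ID string via -r. For reference, a roughly equivalent connection can be made from application code with SPDK's public NVMe API; this is only a sketch, assuming the spdk_env_*/spdk_nvme_* calls as recalled from spdk/env.h and spdk/nvme.h, with error reporting trimmed:

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport ID string the test passes to spdk_nvme_identify -r. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Drives the connect/enable/identify state machine traced below. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("subnqn=%s mdts=%u nn=%u\n", cdata->subnqn, cdata->mdts, cdata->nn);

	spdk_nvme_detach(ctrlr);
	return 0;
}

The [nqn.2016-06.io.spdk:cnode1] state transitions that follow in the log (connect adminq, read vs/cap, check en, enable, identify, configure AER, set keep alive timeout, set number of queues, identify active ns) are what spdk_nvme_connect() runs to completion before returning the controller handle.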
00:22:20.573 [2024-07-13 06:17:26.908762] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1190284 ] 00:22:20.573 EAL: No free 2048 kB hugepages reported on node 1 00:22:20.573 [2024-07-13 06:17:26.942625] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:20.573 [2024-07-13 06:17:26.942672] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:20.573 [2024-07-13 06:17:26.942681] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:20.573 [2024-07-13 06:17:26.942698] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:20.573 [2024-07-13 06:17:26.942709] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:20.573 [2024-07-13 06:17:26.942934] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:20.573 [2024-07-13 06:17:26.942978] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x8d7e10 0 00:22:20.573 [2024-07-13 06:17:26.953896] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:20.573 [2024-07-13 06:17:26.953916] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:20.573 [2024-07-13 06:17:26.953924] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:20.573 [2024-07-13 06:17:26.953930] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:20.573 [2024-07-13 06:17:26.953971] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.573 [2024-07-13 06:17:26.953983] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.573 [2024-07-13 06:17:26.953990] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8d7e10) 00:22:20.573 [2024-07-13 06:17:26.954004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:20.573 [2024-07-13 06:17:26.954030] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x957bf0, cid 0, qid 0 00:22:20.573 [2024-07-13 06:17:26.963879] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.573 [2024-07-13 06:17:26.963899] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.573 [2024-07-13 06:17:26.963907] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.573 [2024-07-13 06:17:26.963914] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x957bf0) on tqpair=0x8d7e10 00:22:20.573 [2024-07-13 06:17:26.963932] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:20.573 [2024-07-13 06:17:26.963950] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:20.573 [2024-07-13 06:17:26.963970] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:20.573 [2024-07-13 06:17:26.963987] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.573 [2024-07-13 06:17:26.963996] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.573 [2024-07-13 
06:17:26.964003] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8d7e10) 00:22:20.573 [2024-07-13 06:17:26.964014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.573 [2024-07-13 06:17:26.964039] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x957bf0, cid 0, qid 0 00:22:20.573 [2024-07-13 06:17:26.964205] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.573 [2024-07-13 06:17:26.964223] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.573 [2024-07-13 06:17:26.964230] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.573 [2024-07-13 06:17:26.964237] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x957bf0) on tqpair=0x8d7e10 00:22:20.573 [2024-07-13 06:17:26.964246] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:20.573 [2024-07-13 06:17:26.964261] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:20.573 [2024-07-13 06:17:26.964276] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.573 [2024-07-13 06:17:26.964284] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.573 [2024-07-13 06:17:26.964291] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8d7e10) 00:22:20.573 [2024-07-13 06:17:26.964302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.573 [2024-07-13 06:17:26.964325] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x957bf0, cid 0, qid 0 00:22:20.573 [2024-07-13 06:17:26.964443] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.573 [2024-07-13 06:17:26.964460] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.573 [2024-07-13 06:17:26.964470] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.573 [2024-07-13 06:17:26.964477] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x957bf0) on tqpair=0x8d7e10 00:22:20.573 [2024-07-13 06:17:26.964486] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:20.573 [2024-07-13 06:17:26.964501] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:20.573 [2024-07-13 06:17:26.964516] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.573 [2024-07-13 06:17:26.964525] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.573 [2024-07-13 06:17:26.964532] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8d7e10) 00:22:20.573 [2024-07-13 06:17:26.964543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.573 [2024-07-13 06:17:26.964565] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x957bf0, cid 0, qid 0 00:22:20.573 [2024-07-13 06:17:26.964698] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.573 [2024-07-13 06:17:26.964716] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.573 
[2024-07-13 06:17:26.964723] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.573 [2024-07-13 06:17:26.964730] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x957bf0) on tqpair=0x8d7e10 00:22:20.573 [2024-07-13 06:17:26.964739] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:20.573 [2024-07-13 06:17:26.964762] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.573 [2024-07-13 06:17:26.964773] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.573 [2024-07-13 06:17:26.964780] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8d7e10) 00:22:20.573 [2024-07-13 06:17:26.964791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.573 [2024-07-13 06:17:26.964813] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x957bf0, cid 0, qid 0 00:22:20.573 [2024-07-13 06:17:26.964961] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.573 [2024-07-13 06:17:26.964978] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.573 [2024-07-13 06:17:26.964986] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.573 [2024-07-13 06:17:26.964992] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x957bf0) on tqpair=0x8d7e10 00:22:20.573 [2024-07-13 06:17:26.965003] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:20.573 [2024-07-13 06:17:26.965013] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:20.573 [2024-07-13 06:17:26.965027] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:20.573 [2024-07-13 06:17:26.965138] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:20.573 [2024-07-13 06:17:26.965147] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:20.573 [2024-07-13 06:17:26.965174] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.573 [2024-07-13 06:17:26.965182] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.573 [2024-07-13 06:17:26.965188] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8d7e10) 00:22:20.573 [2024-07-13 06:17:26.965199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.573 [2024-07-13 06:17:26.965235] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x957bf0, cid 0, qid 0 00:22:20.573 [2024-07-13 06:17:26.965449] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.573 [2024-07-13 06:17:26.965465] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.573 [2024-07-13 06:17:26.965472] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.573 [2024-07-13 06:17:26.965479] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x957bf0) on tqpair=0x8d7e10 00:22:20.573 
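The state names around this point (check en, CC.EN = 0 && CSTS.RDY = 0, enable controller by writing CC.EN = 1, then wait for CSTS.RDY = 1 in the entries that follow) are the standard NVMe controller enable handshake, carried here over FABRIC PROPERTY GET/SET capsules instead of MMIO register reads and writes. A toy, self-contained C sketch of that handshake; read_cc/read_csts/write_cc are hypothetical stand-ins for the property accessors, not SPDK functions:

#include <stdint.h>
#include <stdio.h>

/* Toy stand-ins for the FABRIC PROPERTY GET/SET capsules in the trace above;
 * they simulate a controller that reports ready as soon as CC.EN is set. */
static uint32_t cc, csts;
static uint32_t read_cc(void)   { return cc; }
static uint32_t read_csts(void) { return csts; }
static void     write_cc(uint32_t v) { cc = v; csts = (v & 1u) ? 1u : 0u; }

#define CC_EN    (1u << 0)   /* CC.EN:    enable bit            */
#define CSTS_RDY (1u << 0)   /* CSTS.RDY: controller ready bit  */

/* The enable handshake the driver logs here: clear EN and wait for RDY = 0 if
 * the controller was left enabled, then set EN and wait for RDY = 1. */
static void
enable_controller(void)
{
	if (read_cc() & CC_EN) {
		write_cc(read_cc() & ~CC_EN);
		while (read_csts() & CSTS_RDY) { }   /* "wait for CSTS.RDY = 0" */
	}
	write_cc(read_cc() | CC_EN);                 /* "Setting CC.EN = 1"     */
	while (!(read_csts() & CSTS_RDY)) { }        /* "wait for CSTS.RDY = 1" */
}

int
main(void)
{
	enable_controller();
	printf("CSTS.RDY=%u\n", read_csts() & CSTS_RDY);
	return 0;
}

In the trace the controller starts out with CC.EN = 0 and CSTS.RDY = 0, so the disable branch is skipped and the driver proceeds directly to setting CC.EN = 1.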
[2024-07-13 06:17:26.965488] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:20.573 [2024-07-13 06:17:26.965509] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.573 [2024-07-13 06:17:26.965518] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.573 [2024-07-13 06:17:26.965525] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8d7e10) 00:22:20.574 [2024-07-13 06:17:26.965536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.574 [2024-07-13 06:17:26.965561] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x957bf0, cid 0, qid 0 00:22:20.574 [2024-07-13 06:17:26.965674] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.574 [2024-07-13 06:17:26.965693] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.574 [2024-07-13 06:17:26.965700] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.574 [2024-07-13 06:17:26.965707] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x957bf0) on tqpair=0x8d7e10 00:22:20.574 [2024-07-13 06:17:26.965715] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:20.574 [2024-07-13 06:17:26.965728] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:20.574 [2024-07-13 06:17:26.965743] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:20.574 [2024-07-13 06:17:26.965760] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:20.574 [2024-07-13 06:17:26.965775] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.574 [2024-07-13 06:17:26.965783] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.574 [2024-07-13 06:17:26.965789] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8d7e10) 00:22:20.574 [2024-07-13 06:17:26.965800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.574 [2024-07-13 06:17:26.965822] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x957bf0, cid 0, qid 0 00:22:20.574 [2024-07-13 06:17:26.966012] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:20.574 [2024-07-13 06:17:26.966029] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:20.574 [2024-07-13 06:17:26.966041] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:20.574 [2024-07-13 06:17:26.966052] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8d7e10): datao=0, datal=4096, cccid=0 00:22:20.574 [2024-07-13 06:17:26.966064] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x957bf0) on tqpair(0x8d7e10): expected_datao=0, payload_size=4096 00:22:20.574 [2024-07-13 06:17:26.966100] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:20.574 [2024-07-13 06:17:26.966114] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
00:22:20.574 [2024-07-13 06:17:26.966201] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.574 [2024-07-13 06:17:26.966217] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.574 [2024-07-13 06:17:26.966224] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.574 [2024-07-13 06:17:26.966230] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x957bf0) on tqpair=0x8d7e10 00:22:20.574 [2024-07-13 06:17:26.966242] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:20.574 [2024-07-13 06:17:26.966250] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:20.574 [2024-07-13 06:17:26.966257] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:20.574 [2024-07-13 06:17:26.966264] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:20.574 [2024-07-13 06:17:26.966271] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:20.574 [2024-07-13 06:17:26.966279] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:20.574 [2024-07-13 06:17:26.966299] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:20.574 [2024-07-13 06:17:26.966314] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.574 [2024-07-13 06:17:26.966322] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.574 [2024-07-13 06:17:26.966328] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8d7e10) 00:22:20.574 [2024-07-13 06:17:26.966339] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:20.574 [2024-07-13 06:17:26.966361] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x957bf0, cid 0, qid 0 00:22:20.574 [2024-07-13 06:17:26.966526] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.574 [2024-07-13 06:17:26.966547] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.574 [2024-07-13 06:17:26.966555] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.574 [2024-07-13 06:17:26.966562] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x957bf0) on tqpair=0x8d7e10 00:22:20.574 [2024-07-13 06:17:26.966573] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.574 [2024-07-13 06:17:26.966581] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.574 [2024-07-13 06:17:26.966588] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8d7e10) 00:22:20.574 [2024-07-13 06:17:26.966598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:20.574 [2024-07-13 06:17:26.966609] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.574 [2024-07-13 06:17:26.966616] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.574 [2024-07-13 06:17:26.966622] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on 
tqpair(0x8d7e10) 00:22:20.574 [2024-07-13 06:17:26.966631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:20.574 [2024-07-13 06:17:26.966641] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.574 [2024-07-13 06:17:26.966648] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.574 [2024-07-13 06:17:26.966654] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x8d7e10) 00:22:20.574 [2024-07-13 06:17:26.966663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:20.574 [2024-07-13 06:17:26.966691] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.574 [2024-07-13 06:17:26.966698] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.574 [2024-07-13 06:17:26.966704] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8d7e10) 00:22:20.574 [2024-07-13 06:17:26.966713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:20.574 [2024-07-13 06:17:26.966722] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:20.574 [2024-07-13 06:17:26.966756] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:20.574 [2024-07-13 06:17:26.966769] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.574 [2024-07-13 06:17:26.966776] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.574 [2024-07-13 06:17:26.966782] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8d7e10) 00:22:20.574 [2024-07-13 06:17:26.966792] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.574 [2024-07-13 06:17:26.966814] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x957bf0, cid 0, qid 0 00:22:20.574 [2024-07-13 06:17:26.966839] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x957d50, cid 1, qid 0 00:22:20.574 [2024-07-13 06:17:26.966847] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x957eb0, cid 2, qid 0 00:22:20.574 [2024-07-13 06:17:26.966855] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x958010, cid 3, qid 0 00:22:20.574 [2024-07-13 06:17:26.966862] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x958170, cid 4, qid 0 00:22:20.574 [2024-07-13 06:17:26.967099] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.574 [2024-07-13 06:17:26.967118] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.574 [2024-07-13 06:17:26.967125] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.574 [2024-07-13 06:17:26.967132] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x958170) on tqpair=0x8d7e10 00:22:20.574 [2024-07-13 06:17:26.967144] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:20.574 [2024-07-13 06:17:26.967154] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:20.574 [2024-07-13 06:17:26.967169] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:20.574 [2024-07-13 06:17:26.967187] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:20.574 [2024-07-13 06:17:26.967213] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.574 [2024-07-13 06:17:26.967221] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.574 [2024-07-13 06:17:26.967228] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8d7e10) 00:22:20.574 [2024-07-13 06:17:26.967238] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:20.574 [2024-07-13 06:17:26.967259] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x958170, cid 4, qid 0 00:22:20.574 [2024-07-13 06:17:26.967412] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.574 [2024-07-13 06:17:26.967430] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.574 [2024-07-13 06:17:26.967438] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.574 [2024-07-13 06:17:26.967445] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x958170) on tqpair=0x8d7e10 00:22:20.574 [2024-07-13 06:17:26.967510] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:20.574 [2024-07-13 06:17:26.967531] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:20.574 [2024-07-13 06:17:26.967547] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.574 [2024-07-13 06:17:26.967555] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.574 [2024-07-13 06:17:26.967562] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8d7e10) 00:22:20.574 [2024-07-13 06:17:26.967587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.574 [2024-07-13 06:17:26.967608] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x958170, cid 4, qid 0 00:22:20.574 [2024-07-13 06:17:26.967796] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:20.574 [2024-07-13 06:17:26.967813] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:20.574 [2024-07-13 06:17:26.967820] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:20.574 [2024-07-13 06:17:26.967831] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8d7e10): datao=0, datal=4096, cccid=4 00:22:20.574 [2024-07-13 06:17:26.967843] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x958170) on tqpair(0x8d7e10): expected_datao=0, payload_size=4096 00:22:20.574 [2024-07-13 06:17:26.967860] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:20.574 [2024-07-13 06:17:26.971896] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:20.574 [2024-07-13 06:17:26.971912] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.574 [2024-07-13 06:17:26.971922] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.574 [2024-07-13 06:17:26.971928] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.574 [2024-07-13 06:17:26.971935] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x958170) on tqpair=0x8d7e10 00:22:20.574 [2024-07-13 06:17:26.971955] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:20.574 [2024-07-13 06:17:26.971974] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:20.575 [2024-07-13 06:17:26.972007] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:20.575 [2024-07-13 06:17:26.972024] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.575 [2024-07-13 06:17:26.972032] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.575 [2024-07-13 06:17:26.972039] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8d7e10) 00:22:20.575 [2024-07-13 06:17:26.972049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.575 [2024-07-13 06:17:26.972072] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x958170, cid 4, qid 0 00:22:20.575 [2024-07-13 06:17:26.972260] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:20.575 [2024-07-13 06:17:26.972281] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:20.575 [2024-07-13 06:17:26.972294] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:20.575 [2024-07-13 06:17:26.972304] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8d7e10): datao=0, datal=4096, cccid=4 00:22:20.575 [2024-07-13 06:17:26.972318] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x958170) on tqpair(0x8d7e10): expected_datao=0, payload_size=4096 00:22:20.575 [2024-07-13 06:17:26.972340] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:20.575 [2024-07-13 06:17:26.972350] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:20.575 [2024-07-13 06:17:27.013005] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.575 [2024-07-13 06:17:27.013027] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.575 [2024-07-13 06:17:27.013035] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.575 [2024-07-13 06:17:27.013042] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x958170) on tqpair=0x8d7e10 00:22:20.575 [2024-07-13 06:17:27.013067] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:20.575 [2024-07-13 06:17:27.013089] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:20.575 [2024-07-13 06:17:27.013106] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.575 [2024-07-13 06:17:27.013114] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.575 [2024-07-13 
06:17:27.013121] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8d7e10) 00:22:20.575 [2024-07-13 06:17:27.013132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.575 [2024-07-13 06:17:27.013156] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x958170, cid 4, qid 0 00:22:20.575 [2024-07-13 06:17:27.013278] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:20.575 [2024-07-13 06:17:27.013299] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:20.575 [2024-07-13 06:17:27.013312] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:20.575 [2024-07-13 06:17:27.013322] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8d7e10): datao=0, datal=4096, cccid=4 00:22:20.575 [2024-07-13 06:17:27.013335] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x958170) on tqpair(0x8d7e10): expected_datao=0, payload_size=4096 00:22:20.575 [2024-07-13 06:17:27.013361] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:20.575 [2024-07-13 06:17:27.013371] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:20.575 [2024-07-13 06:17:27.013444] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.575 [2024-07-13 06:17:27.013460] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.575 [2024-07-13 06:17:27.013467] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.575 [2024-07-13 06:17:27.013474] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x958170) on tqpair=0x8d7e10 00:22:20.575 [2024-07-13 06:17:27.013491] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:20.575 [2024-07-13 06:17:27.013509] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:20.575 [2024-07-13 06:17:27.013527] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:20.575 [2024-07-13 06:17:27.013538] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:20.575 [2024-07-13 06:17:27.013547] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:20.575 [2024-07-13 06:17:27.013556] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:20.575 [2024-07-13 06:17:27.013564] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:20.575 [2024-07-13 06:17:27.013572] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:20.575 [2024-07-13 06:17:27.013592] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.575 [2024-07-13 06:17:27.013601] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.575 [2024-07-13 06:17:27.013608] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8d7e10) 00:22:20.575 [2024-07-13 
06:17:27.013634] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.575 [2024-07-13 06:17:27.013646] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.575 [2024-07-13 06:17:27.013653] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.575 [2024-07-13 06:17:27.013659] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8d7e10) 00:22:20.575 [2024-07-13 06:17:27.013668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:20.575 [2024-07-13 06:17:27.013708] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x958170, cid 4, qid 0 00:22:20.575 [2024-07-13 06:17:27.013720] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9582d0, cid 5, qid 0 00:22:20.575 [2024-07-13 06:17:27.017880] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.575 [2024-07-13 06:17:27.017898] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.575 [2024-07-13 06:17:27.017905] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.575 [2024-07-13 06:17:27.017912] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x958170) on tqpair=0x8d7e10 00:22:20.575 [2024-07-13 06:17:27.017923] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.575 [2024-07-13 06:17:27.017933] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.575 [2024-07-13 06:17:27.017939] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.575 [2024-07-13 06:17:27.017946] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9582d0) on tqpair=0x8d7e10 00:22:20.575 [2024-07-13 06:17:27.017964] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.575 [2024-07-13 06:17:27.017975] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.575 [2024-07-13 06:17:27.017982] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8d7e10) 00:22:20.575 [2024-07-13 06:17:27.017993] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.575 [2024-07-13 06:17:27.018016] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9582d0, cid 5, qid 0 00:22:20.575 [2024-07-13 06:17:27.018196] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.575 [2024-07-13 06:17:27.018212] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.575 [2024-07-13 06:17:27.018223] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.575 [2024-07-13 06:17:27.018231] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9582d0) on tqpair=0x8d7e10 00:22:20.575 [2024-07-13 06:17:27.018249] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.575 [2024-07-13 06:17:27.018260] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.575 [2024-07-13 06:17:27.018267] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8d7e10) 00:22:20.575 [2024-07-13 06:17:27.018277] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
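The admin-queue activity traced above (ASYNC EVENT REQUEST, SET/GET FEATURES, KEEP ALIVE, IDENTIFY) is generated internally by the SPDK host library as it walks the controller state machine to the ready state. For reference, a rough nvme-cli equivalent that exercises the same admin commands against the listener seen in this trace; this assumes nvme-cli is installed on the initiator side and the target is still listening at 10.0.0.2:4420, and the /dev/nvme1 device name is illustrative only, since it depends on local enumeration:

# Connect to the subsystem from the trace and issue comparable admin commands (sketch only).
nvme discover -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme id-ctrl /dev/nvme1              # IDENTIFY controller, as printed in the report below
nvme id-ns /dev/nvme1 -n 1           # IDENTIFY namespace 1
nvme get-feature /dev/nvme1 -f 0x7   # Number of Queues (cdw10:00000007 above)
nvme get-feature /dev/nvme1 -f 0xf   # Keep Alive Timer (cdw10:0000000f above)
nvme disconnect -n nqn.2016-06.io.spdk:cnode1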
00:22:20.575 [2024-07-13 06:17:27.018299] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9582d0, cid 5, qid 0 00:22:20.575 [2024-07-13 06:17:27.018477] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.575 [2024-07-13 06:17:27.018493] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.575 [2024-07-13 06:17:27.018500] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.575 [2024-07-13 06:17:27.018507] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9582d0) on tqpair=0x8d7e10 00:22:20.575 [2024-07-13 06:17:27.018526] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.575 [2024-07-13 06:17:27.018536] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.575 [2024-07-13 06:17:27.018543] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8d7e10) 00:22:20.575 [2024-07-13 06:17:27.018553] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.575 [2024-07-13 06:17:27.018575] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9582d0, cid 5, qid 0 00:22:20.575 [2024-07-13 06:17:27.018753] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.575 [2024-07-13 06:17:27.018769] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.575 [2024-07-13 06:17:27.018776] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.575 [2024-07-13 06:17:27.018783] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9582d0) on tqpair=0x8d7e10 00:22:20.576 [2024-07-13 06:17:27.018805] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.576 [2024-07-13 06:17:27.018816] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.576 [2024-07-13 06:17:27.018823] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8d7e10) 00:22:20.576 [2024-07-13 06:17:27.018834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.576 [2024-07-13 06:17:27.018846] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.576 [2024-07-13 06:17:27.018854] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.576 [2024-07-13 06:17:27.018860] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8d7e10) 00:22:20.576 [2024-07-13 06:17:27.018883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.576 [2024-07-13 06:17:27.018897] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.576 [2024-07-13 06:17:27.018905] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.576 [2024-07-13 06:17:27.018912] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x8d7e10) 00:22:20.576 [2024-07-13 06:17:27.018921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.576 [2024-07-13 06:17:27.018933] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.576 [2024-07-13 
06:17:27.018941] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.576 [2024-07-13 06:17:27.018947] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x8d7e10) 00:22:20.576 [2024-07-13 06:17:27.018961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.576 [2024-07-13 06:17:27.018985] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9582d0, cid 5, qid 0 00:22:20.576 [2024-07-13 06:17:27.018997] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x958170, cid 4, qid 0 00:22:20.576 [2024-07-13 06:17:27.019005] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x958430, cid 6, qid 0 00:22:20.576 [2024-07-13 06:17:27.019013] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x958590, cid 7, qid 0 00:22:20.576 [2024-07-13 06:17:27.019292] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:20.576 [2024-07-13 06:17:27.019309] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:20.576 [2024-07-13 06:17:27.019316] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:20.576 [2024-07-13 06:17:27.019323] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8d7e10): datao=0, datal=8192, cccid=5 00:22:20.576 [2024-07-13 06:17:27.019331] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9582d0) on tqpair(0x8d7e10): expected_datao=0, payload_size=8192 00:22:20.576 [2024-07-13 06:17:27.019342] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:20.576 [2024-07-13 06:17:27.019351] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:20.576 [2024-07-13 06:17:27.019359] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:20.576 [2024-07-13 06:17:27.019368] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:20.576 [2024-07-13 06:17:27.019375] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:20.576 [2024-07-13 06:17:27.019381] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8d7e10): datao=0, datal=512, cccid=4 00:22:20.576 [2024-07-13 06:17:27.019389] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x958170) on tqpair(0x8d7e10): expected_datao=0, payload_size=512 00:22:20.576 [2024-07-13 06:17:27.019399] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:20.576 [2024-07-13 06:17:27.019406] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:20.576 [2024-07-13 06:17:27.019416] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:20.576 [2024-07-13 06:17:27.019431] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:20.576 [2024-07-13 06:17:27.019443] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:20.576 [2024-07-13 06:17:27.019453] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8d7e10): datao=0, datal=512, cccid=6 00:22:20.576 [2024-07-13 06:17:27.019464] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x958430) on tqpair(0x8d7e10): expected_datao=0, payload_size=512 00:22:20.576 [2024-07-13 06:17:27.019481] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:20.576 [2024-07-13 06:17:27.019491] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:20.576 
[2024-07-13 06:17:27.019500] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:20.576 [2024-07-13 06:17:27.019509] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:20.576 [2024-07-13 06:17:27.019516] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:20.576 [2024-07-13 06:17:27.019522] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8d7e10): datao=0, datal=4096, cccid=7 00:22:20.576 [2024-07-13 06:17:27.019529] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x958590) on tqpair(0x8d7e10): expected_datao=0, payload_size=4096 00:22:20.576 [2024-07-13 06:17:27.019540] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:20.576 [2024-07-13 06:17:27.019548] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:20.576 [2024-07-13 06:17:27.019560] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.576 [2024-07-13 06:17:27.019569] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.576 [2024-07-13 06:17:27.019576] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.576 [2024-07-13 06:17:27.019586] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x9582d0) on tqpair=0x8d7e10 00:22:20.576 [2024-07-13 06:17:27.019607] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.576 [2024-07-13 06:17:27.019618] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.576 [2024-07-13 06:17:27.019625] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.576 [2024-07-13 06:17:27.019632] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x958170) on tqpair=0x8d7e10 00:22:20.576 [2024-07-13 06:17:27.019646] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.576 [2024-07-13 06:17:27.019656] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.576 [2024-07-13 06:17:27.019663] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.576 [2024-07-13 06:17:27.019670] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x958430) on tqpair=0x8d7e10 00:22:20.576 [2024-07-13 06:17:27.019680] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.576 [2024-07-13 06:17:27.019690] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.576 [2024-07-13 06:17:27.019696] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.576 [2024-07-13 06:17:27.019703] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x958590) on tqpair=0x8d7e10 00:22:20.576 ===================================================== 00:22:20.576 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:20.576 ===================================================== 00:22:20.576 Controller Capabilities/Features 00:22:20.576 ================================ 00:22:20.576 Vendor ID: 8086 00:22:20.576 Subsystem Vendor ID: 8086 00:22:20.576 Serial Number: SPDK00000000000001 00:22:20.576 Model Number: SPDK bdev Controller 00:22:20.576 Firmware Version: 24.01.1 00:22:20.576 Recommended Arb Burst: 6 00:22:20.576 IEEE OUI Identifier: e4 d2 5c 00:22:20.576 Multi-path I/O 00:22:20.576 May have multiple subsystem ports: Yes 00:22:20.576 May have multiple controllers: Yes 00:22:20.576 Associated with SR-IOV VF: No 00:22:20.576 Max Data Transfer Size: 131072 00:22:20.576 Max Number of Namespaces: 32 
00:22:20.576 Max Number of I/O Queues: 127 00:22:20.576 NVMe Specification Version (VS): 1.3 00:22:20.576 NVMe Specification Version (Identify): 1.3 00:22:20.576 Maximum Queue Entries: 128 00:22:20.576 Contiguous Queues Required: Yes 00:22:20.576 Arbitration Mechanisms Supported 00:22:20.576 Weighted Round Robin: Not Supported 00:22:20.576 Vendor Specific: Not Supported 00:22:20.576 Reset Timeout: 15000 ms 00:22:20.576 Doorbell Stride: 4 bytes 00:22:20.576 NVM Subsystem Reset: Not Supported 00:22:20.576 Command Sets Supported 00:22:20.576 NVM Command Set: Supported 00:22:20.576 Boot Partition: Not Supported 00:22:20.576 Memory Page Size Minimum: 4096 bytes 00:22:20.576 Memory Page Size Maximum: 4096 bytes 00:22:20.576 Persistent Memory Region: Not Supported 00:22:20.576 Optional Asynchronous Events Supported 00:22:20.576 Namespace Attribute Notices: Supported 00:22:20.576 Firmware Activation Notices: Not Supported 00:22:20.576 ANA Change Notices: Not Supported 00:22:20.576 PLE Aggregate Log Change Notices: Not Supported 00:22:20.576 LBA Status Info Alert Notices: Not Supported 00:22:20.576 EGE Aggregate Log Change Notices: Not Supported 00:22:20.576 Normal NVM Subsystem Shutdown event: Not Supported 00:22:20.576 Zone Descriptor Change Notices: Not Supported 00:22:20.576 Discovery Log Change Notices: Not Supported 00:22:20.576 Controller Attributes 00:22:20.576 128-bit Host Identifier: Supported 00:22:20.576 Non-Operational Permissive Mode: Not Supported 00:22:20.576 NVM Sets: Not Supported 00:22:20.576 Read Recovery Levels: Not Supported 00:22:20.576 Endurance Groups: Not Supported 00:22:20.576 Predictable Latency Mode: Not Supported 00:22:20.576 Traffic Based Keep ALive: Not Supported 00:22:20.576 Namespace Granularity: Not Supported 00:22:20.576 SQ Associations: Not Supported 00:22:20.576 UUID List: Not Supported 00:22:20.576 Multi-Domain Subsystem: Not Supported 00:22:20.576 Fixed Capacity Management: Not Supported 00:22:20.576 Variable Capacity Management: Not Supported 00:22:20.576 Delete Endurance Group: Not Supported 00:22:20.576 Delete NVM Set: Not Supported 00:22:20.576 Extended LBA Formats Supported: Not Supported 00:22:20.576 Flexible Data Placement Supported: Not Supported 00:22:20.576 00:22:20.576 Controller Memory Buffer Support 00:22:20.576 ================================ 00:22:20.576 Supported: No 00:22:20.576 00:22:20.576 Persistent Memory Region Support 00:22:20.576 ================================ 00:22:20.576 Supported: No 00:22:20.576 00:22:20.576 Admin Command Set Attributes 00:22:20.576 ============================ 00:22:20.576 Security Send/Receive: Not Supported 00:22:20.576 Format NVM: Not Supported 00:22:20.576 Firmware Activate/Download: Not Supported 00:22:20.576 Namespace Management: Not Supported 00:22:20.576 Device Self-Test: Not Supported 00:22:20.576 Directives: Not Supported 00:22:20.576 NVMe-MI: Not Supported 00:22:20.576 Virtualization Management: Not Supported 00:22:20.576 Doorbell Buffer Config: Not Supported 00:22:20.576 Get LBA Status Capability: Not Supported 00:22:20.576 Command & Feature Lockdown Capability: Not Supported 00:22:20.576 Abort Command Limit: 4 00:22:20.576 Async Event Request Limit: 4 00:22:20.577 Number of Firmware Slots: N/A 00:22:20.577 Firmware Slot 1 Read-Only: N/A 00:22:20.577 Firmware Activation Without Reset: N/A 00:22:20.577 Multiple Update Detection Support: N/A 00:22:20.577 Firmware Update Granularity: No Information Provided 00:22:20.577 Per-Namespace SMART Log: No 00:22:20.577 Asymmetric Namespace Access Log Page: Not 
Supported 00:22:20.577 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:20.577 Command Effects Log Page: Supported 00:22:20.577 Get Log Page Extended Data: Supported 00:22:20.577 Telemetry Log Pages: Not Supported 00:22:20.577 Persistent Event Log Pages: Not Supported 00:22:20.577 Supported Log Pages Log Page: May Support 00:22:20.577 Commands Supported & Effects Log Page: Not Supported 00:22:20.577 Feature Identifiers & Effects Log Page:May Support 00:22:20.577 NVMe-MI Commands & Effects Log Page: May Support 00:22:20.577 Data Area 4 for Telemetry Log: Not Supported 00:22:20.577 Error Log Page Entries Supported: 128 00:22:20.577 Keep Alive: Supported 00:22:20.577 Keep Alive Granularity: 10000 ms 00:22:20.577 00:22:20.577 NVM Command Set Attributes 00:22:20.577 ========================== 00:22:20.577 Submission Queue Entry Size 00:22:20.577 Max: 64 00:22:20.577 Min: 64 00:22:20.577 Completion Queue Entry Size 00:22:20.577 Max: 16 00:22:20.577 Min: 16 00:22:20.577 Number of Namespaces: 32 00:22:20.577 Compare Command: Supported 00:22:20.577 Write Uncorrectable Command: Not Supported 00:22:20.577 Dataset Management Command: Supported 00:22:20.577 Write Zeroes Command: Supported 00:22:20.577 Set Features Save Field: Not Supported 00:22:20.577 Reservations: Supported 00:22:20.577 Timestamp: Not Supported 00:22:20.577 Copy: Supported 00:22:20.577 Volatile Write Cache: Present 00:22:20.577 Atomic Write Unit (Normal): 1 00:22:20.577 Atomic Write Unit (PFail): 1 00:22:20.577 Atomic Compare & Write Unit: 1 00:22:20.577 Fused Compare & Write: Supported 00:22:20.577 Scatter-Gather List 00:22:20.577 SGL Command Set: Supported 00:22:20.577 SGL Keyed: Supported 00:22:20.577 SGL Bit Bucket Descriptor: Not Supported 00:22:20.577 SGL Metadata Pointer: Not Supported 00:22:20.577 Oversized SGL: Not Supported 00:22:20.577 SGL Metadata Address: Not Supported 00:22:20.577 SGL Offset: Supported 00:22:20.577 Transport SGL Data Block: Not Supported 00:22:20.577 Replay Protected Memory Block: Not Supported 00:22:20.577 00:22:20.577 Firmware Slot Information 00:22:20.577 ========================= 00:22:20.577 Active slot: 1 00:22:20.577 Slot 1 Firmware Revision: 24.01.1 00:22:20.577 00:22:20.577 00:22:20.577 Commands Supported and Effects 00:22:20.577 ============================== 00:22:20.577 Admin Commands 00:22:20.577 -------------- 00:22:20.577 Get Log Page (02h): Supported 00:22:20.577 Identify (06h): Supported 00:22:20.577 Abort (08h): Supported 00:22:20.577 Set Features (09h): Supported 00:22:20.577 Get Features (0Ah): Supported 00:22:20.577 Asynchronous Event Request (0Ch): Supported 00:22:20.577 Keep Alive (18h): Supported 00:22:20.577 I/O Commands 00:22:20.577 ------------ 00:22:20.577 Flush (00h): Supported LBA-Change 00:22:20.577 Write (01h): Supported LBA-Change 00:22:20.577 Read (02h): Supported 00:22:20.577 Compare (05h): Supported 00:22:20.577 Write Zeroes (08h): Supported LBA-Change 00:22:20.577 Dataset Management (09h): Supported LBA-Change 00:22:20.577 Copy (19h): Supported LBA-Change 00:22:20.577 Unknown (79h): Supported LBA-Change 00:22:20.577 Unknown (7Ah): Supported 00:22:20.577 00:22:20.577 Error Log 00:22:20.577 ========= 00:22:20.577 00:22:20.577 Arbitration 00:22:20.577 =========== 00:22:20.577 Arbitration Burst: 1 00:22:20.577 00:22:20.577 Power Management 00:22:20.577 ================ 00:22:20.577 Number of Power States: 1 00:22:20.577 Current Power State: Power State #0 00:22:20.577 Power State #0: 00:22:20.577 Max Power: 0.00 W 00:22:20.577 Non-Operational State: Operational 
00:22:20.577 Entry Latency: Not Reported 00:22:20.577 Exit Latency: Not Reported 00:22:20.577 Relative Read Throughput: 0 00:22:20.577 Relative Read Latency: 0 00:22:20.577 Relative Write Throughput: 0 00:22:20.577 Relative Write Latency: 0 00:22:20.577 Idle Power: Not Reported 00:22:20.577 Active Power: Not Reported 00:22:20.577 Non-Operational Permissive Mode: Not Supported 00:22:20.577 00:22:20.577 Health Information 00:22:20.577 ================== 00:22:20.577 Critical Warnings: 00:22:20.577 Available Spare Space: OK 00:22:20.577 Temperature: OK 00:22:20.577 Device Reliability: OK 00:22:20.577 Read Only: No 00:22:20.577 Volatile Memory Backup: OK 00:22:20.577 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:20.577 Temperature Threshold: [2024-07-13 06:17:27.019840] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.577 [2024-07-13 06:17:27.019877] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.577 [2024-07-13 06:17:27.019885] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x8d7e10) 00:22:20.577 [2024-07-13 06:17:27.019896] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.577 [2024-07-13 06:17:27.019919] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x958590, cid 7, qid 0 00:22:20.577 [2024-07-13 06:17:27.020100] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.577 [2024-07-13 06:17:27.020116] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.577 [2024-07-13 06:17:27.020123] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.577 [2024-07-13 06:17:27.020130] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x958590) on tqpair=0x8d7e10 00:22:20.577 [2024-07-13 06:17:27.020175] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:20.577 [2024-07-13 06:17:27.020213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.577 [2024-07-13 06:17:27.020227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.577 [2024-07-13 06:17:27.020237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.577 [2024-07-13 06:17:27.020246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:20.577 [2024-07-13 06:17:27.020273] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.577 [2024-07-13 06:17:27.020282] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.577 [2024-07-13 06:17:27.020288] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8d7e10) 00:22:20.577 [2024-07-13 06:17:27.020298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.577 [2024-07-13 06:17:27.020319] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x958010, cid 3, qid 0 00:22:20.577 [2024-07-13 06:17:27.020478] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.577 [2024-07-13 06:17:27.020494] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:22:20.577 [2024-07-13 06:17:27.020503] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.577 [2024-07-13 06:17:27.020512] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x958010) on tqpair=0x8d7e10 00:22:20.577 [2024-07-13 06:17:27.020528] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.577 [2024-07-13 06:17:27.020537] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.577 [2024-07-13 06:17:27.020543] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8d7e10) 00:22:20.577 [2024-07-13 06:17:27.020554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.577 [2024-07-13 06:17:27.020583] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x958010, cid 3, qid 0 00:22:20.577 [2024-07-13 06:17:27.020732] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.577 [2024-07-13 06:17:27.020748] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.577 [2024-07-13 06:17:27.020755] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.577 [2024-07-13 06:17:27.020762] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x958010) on tqpair=0x8d7e10 00:22:20.577 [2024-07-13 06:17:27.020770] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:20.577 [2024-07-13 06:17:27.020778] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:20.577 [2024-07-13 06:17:27.020796] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.577 [2024-07-13 06:17:27.020807] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.577 [2024-07-13 06:17:27.020814] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8d7e10) 00:22:20.577 [2024-07-13 06:17:27.020825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.577 [2024-07-13 06:17:27.020846] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x958010, cid 3, qid 0 00:22:20.577 [2024-07-13 06:17:27.021012] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.577 [2024-07-13 06:17:27.021031] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.577 [2024-07-13 06:17:27.021038] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.577 [2024-07-13 06:17:27.021045] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x958010) on tqpair=0x8d7e10 00:22:20.577 [2024-07-13 06:17:27.021063] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.577 [2024-07-13 06:17:27.021075] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.577 [2024-07-13 06:17:27.021082] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8d7e10) 00:22:20.577 [2024-07-13 06:17:27.021093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.577 [2024-07-13 06:17:27.021115] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x958010, cid 3, qid 0 00:22:20.577 [2024-07-13 06:17:27.021242] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:22:20.577 [2024-07-13 06:17:27.021258] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.577 [2024-07-13 06:17:27.021265] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.577 [2024-07-13 06:17:27.021272] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x958010) on tqpair=0x8d7e10 00:22:20.577 [2024-07-13 06:17:27.021290] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.577 [2024-07-13 06:17:27.021301] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.578 [2024-07-13 06:17:27.021308] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8d7e10) 00:22:20.578 [2024-07-13 06:17:27.021319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.578 [2024-07-13 06:17:27.021340] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x958010, cid 3, qid 0 00:22:20.578 [2024-07-13 06:17:27.021481] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.578 [2024-07-13 06:17:27.021497] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.578 [2024-07-13 06:17:27.021509] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.578 [2024-07-13 06:17:27.021519] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x958010) on tqpair=0x8d7e10 00:22:20.578 [2024-07-13 06:17:27.021537] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.578 [2024-07-13 06:17:27.021547] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.578 [2024-07-13 06:17:27.021553] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8d7e10) 00:22:20.578 [2024-07-13 06:17:27.021567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.578 [2024-07-13 06:17:27.021590] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x958010, cid 3, qid 0 00:22:20.578 [2024-07-13 06:17:27.021720] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.578 [2024-07-13 06:17:27.021736] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.578 [2024-07-13 06:17:27.021743] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.578 [2024-07-13 06:17:27.021750] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x958010) on tqpair=0x8d7e10 00:22:20.578 [2024-07-13 06:17:27.021768] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.578 [2024-07-13 06:17:27.021779] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.578 [2024-07-13 06:17:27.021786] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8d7e10) 00:22:20.578 [2024-07-13 06:17:27.021797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.578 [2024-07-13 06:17:27.021818] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x958010, cid 3, qid 0 00:22:20.578 [2024-07-13 06:17:27.025897] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.578 [2024-07-13 06:17:27.025915] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.578 [2024-07-13 06:17:27.025922] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:22:20.578 [2024-07-13 06:17:27.025929] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x958010) on tqpair=0x8d7e10 00:22:20.578 [2024-07-13 06:17:27.025947] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:20.578 [2024-07-13 06:17:27.025958] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:20.578 [2024-07-13 06:17:27.025964] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8d7e10) 00:22:20.578 [2024-07-13 06:17:27.025975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:20.578 [2024-07-13 06:17:27.025996] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x958010, cid 3, qid 0 00:22:20.578 [2024-07-13 06:17:27.026190] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:20.578 [2024-07-13 06:17:27.026206] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:20.578 [2024-07-13 06:17:27.026213] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:20.578 [2024-07-13 06:17:27.026220] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x958010) on tqpair=0x8d7e10 00:22:20.578 [2024-07-13 06:17:27.026234] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:22:20.578 0 Kelvin (-273 Celsius) 00:22:20.578 Available Spare: 0% 00:22:20.578 Available Spare Threshold: 0% 00:22:20.578 Life Percentage Used: 0% 00:22:20.578 Data Units Read: 0 00:22:20.578 Data Units Written: 0 00:22:20.578 Host Read Commands: 0 00:22:20.578 Host Write Commands: 0 00:22:20.578 Controller Busy Time: 0 minutes 00:22:20.578 Power Cycles: 0 00:22:20.578 Power On Hours: 0 hours 00:22:20.578 Unsafe Shutdowns: 0 00:22:20.578 Unrecoverable Media Errors: 0 00:22:20.578 Lifetime Error Log Entries: 0 00:22:20.578 Warning Temperature Time: 0 minutes 00:22:20.578 Critical Temperature Time: 0 minutes 00:22:20.578 00:22:20.578 Number of Queues 00:22:20.578 ================ 00:22:20.578 Number of I/O Submission Queues: 127 00:22:20.578 Number of I/O Completion Queues: 127 00:22:20.578 00:22:20.578 Active Namespaces 00:22:20.578 ================= 00:22:20.578 Namespace ID:1 00:22:20.578 Error Recovery Timeout: Unlimited 00:22:20.578 Command Set Identifier: NVM (00h) 00:22:20.578 Deallocate: Supported 00:22:20.578 Deallocated/Unwritten Error: Not Supported 00:22:20.578 Deallocated Read Value: Unknown 00:22:20.578 Deallocate in Write Zeroes: Not Supported 00:22:20.578 Deallocated Guard Field: 0xFFFF 00:22:20.578 Flush: Supported 00:22:20.578 Reservation: Supported 00:22:20.578 Namespace Sharing Capabilities: Multiple Controllers 00:22:20.578 Size (in LBAs): 131072 (0GiB) 00:22:20.578 Capacity (in LBAs): 131072 (0GiB) 00:22:20.578 Utilization (in LBAs): 131072 (0GiB) 00:22:20.578 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:20.578 EUI64: ABCDEF0123456789 00:22:20.578 UUID: 0c3880c2-d863-411b-a471-fe540f087868 00:22:20.578 Thin Provisioning: Not Supported 00:22:20.578 Per-NS Atomic Units: Yes 00:22:20.578 Atomic Boundary Size (Normal): 0 00:22:20.578 Atomic Boundary Size (PFail): 0 00:22:20.578 Atomic Boundary Offset: 0 00:22:20.578 Maximum Single Source Range Length: 65535 00:22:20.578 Maximum Copy Length: 65535 00:22:20.578 Maximum Source Range Count: 1 00:22:20.578 NGUID/EUI64 Never Reused: No 00:22:20.578 Namespace Write Protected: No 00:22:20.578 Number of LBA Formats: 1 
00:22:20.578 Current LBA Format: LBA Format #00 00:22:20.578 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:20.578 00:22:20.578 06:17:27 -- host/identify.sh@51 -- # sync 00:22:20.578 06:17:27 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:20.578 06:17:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:20.578 06:17:27 -- common/autotest_common.sh@10 -- # set +x 00:22:20.578 06:17:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:20.578 06:17:27 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:20.578 06:17:27 -- host/identify.sh@56 -- # nvmftestfini 00:22:20.578 06:17:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:20.578 06:17:27 -- nvmf/common.sh@116 -- # sync 00:22:20.578 06:17:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:20.578 06:17:27 -- nvmf/common.sh@119 -- # set +e 00:22:20.578 06:17:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:20.578 06:17:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:20.578 rmmod nvme_tcp 00:22:20.578 rmmod nvme_fabrics 00:22:20.837 rmmod nvme_keyring 00:22:20.837 06:17:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:20.837 06:17:27 -- nvmf/common.sh@123 -- # set -e 00:22:20.837 06:17:27 -- nvmf/common.sh@124 -- # return 0 00:22:20.837 06:17:27 -- nvmf/common.sh@477 -- # '[' -n 1190122 ']' 00:22:20.837 06:17:27 -- nvmf/common.sh@478 -- # killprocess 1190122 00:22:20.837 06:17:27 -- common/autotest_common.sh@926 -- # '[' -z 1190122 ']' 00:22:20.837 06:17:27 -- common/autotest_common.sh@930 -- # kill -0 1190122 00:22:20.837 06:17:27 -- common/autotest_common.sh@931 -- # uname 00:22:20.837 06:17:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:20.837 06:17:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1190122 00:22:20.837 06:17:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:20.837 06:17:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:20.837 06:17:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1190122' 00:22:20.837 killing process with pid 1190122 00:22:20.837 06:17:27 -- common/autotest_common.sh@945 -- # kill 1190122 00:22:20.837 [2024-07-13 06:17:27.126567] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:22:20.837 06:17:27 -- common/autotest_common.sh@950 -- # wait 1190122 00:22:21.096 06:17:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:21.096 06:17:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:21.096 06:17:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:21.096 06:17:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:21.096 06:17:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:21.096 06:17:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.096 06:17:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:21.096 06:17:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.002 06:17:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:23.002 00:22:23.002 real 0m5.932s 00:22:23.002 user 0m7.059s 00:22:23.002 sys 0m1.783s 00:22:23.002 06:17:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:23.002 06:17:29 -- common/autotest_common.sh@10 -- # set +x 00:22:23.002 ************************************ 00:22:23.002 END TEST nvmf_identify 00:22:23.002 
************************************ 00:22:23.002 06:17:29 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:23.002 06:17:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:23.002 06:17:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:23.002 06:17:29 -- common/autotest_common.sh@10 -- # set +x 00:22:23.002 ************************************ 00:22:23.002 START TEST nvmf_perf 00:22:23.002 ************************************ 00:22:23.002 06:17:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:23.260 * Looking for test storage... 00:22:23.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:23.260 06:17:29 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:23.260 06:17:29 -- nvmf/common.sh@7 -- # uname -s 00:22:23.260 06:17:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:23.260 06:17:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:23.260 06:17:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:23.260 06:17:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:23.260 06:17:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:23.260 06:17:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:23.260 06:17:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:23.260 06:17:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:23.260 06:17:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:23.260 06:17:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:23.260 06:17:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:23.260 06:17:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:23.260 06:17:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:23.260 06:17:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:23.260 06:17:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:23.260 06:17:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:23.260 06:17:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:23.260 06:17:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:23.260 06:17:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:23.261 06:17:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.261 06:17:29 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.261 06:17:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.261 06:17:29 -- paths/export.sh@5 -- # export PATH 00:22:23.261 06:17:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.261 06:17:29 -- nvmf/common.sh@46 -- # : 0 00:22:23.261 06:17:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:23.261 06:17:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:23.261 06:17:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:23.261 06:17:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:23.261 06:17:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:23.261 06:17:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:23.261 06:17:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:23.261 06:17:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:23.261 06:17:29 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:23.261 06:17:29 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:23.261 06:17:29 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:23.261 06:17:29 -- host/perf.sh@17 -- # nvmftestinit 00:22:23.261 06:17:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:23.261 06:17:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:23.261 06:17:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:23.261 06:17:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:23.261 06:17:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:23.261 06:17:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.261 06:17:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:23.261 06:17:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.261 06:17:29 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:23.261 06:17:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:23.261 06:17:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:23.261 06:17:29 -- 
common/autotest_common.sh@10 -- # set +x 00:22:25.162 06:17:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:25.162 06:17:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:25.162 06:17:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:25.162 06:17:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:25.162 06:17:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:25.162 06:17:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:25.162 06:17:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:25.162 06:17:31 -- nvmf/common.sh@294 -- # net_devs=() 00:22:25.162 06:17:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:25.162 06:17:31 -- nvmf/common.sh@295 -- # e810=() 00:22:25.162 06:17:31 -- nvmf/common.sh@295 -- # local -ga e810 00:22:25.162 06:17:31 -- nvmf/common.sh@296 -- # x722=() 00:22:25.162 06:17:31 -- nvmf/common.sh@296 -- # local -ga x722 00:22:25.162 06:17:31 -- nvmf/common.sh@297 -- # mlx=() 00:22:25.162 06:17:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:25.162 06:17:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:25.162 06:17:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:25.162 06:17:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:25.162 06:17:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:25.162 06:17:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:25.162 06:17:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:25.162 06:17:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:25.162 06:17:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:25.162 06:17:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:25.162 06:17:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:25.162 06:17:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:25.162 06:17:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:25.162 06:17:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:25.162 06:17:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:25.162 06:17:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:25.162 06:17:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:25.162 06:17:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:25.162 06:17:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:25.162 06:17:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:25.162 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:25.162 06:17:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:25.162 06:17:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:25.162 06:17:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.162 06:17:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.162 06:17:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:25.162 06:17:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:25.162 06:17:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:25.162 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:25.162 06:17:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:25.162 06:17:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:25.162 06:17:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.162 06:17:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
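The scan above walks the detected PCI functions and keeps the two E810 ports (vendor 0x8086, device 0x159b) bound to the ice driver. The same inventory can be confirmed outside the harness, assuming pciutils is available on the node:

# List Intel E810 functions by vendor:device ID; -k also shows the bound kernel driver (ice).
lspci -d 8086:159b
lspci -d 8086:159b -k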
00:22:25.162 06:17:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:25.162 06:17:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:25.162 06:17:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:25.162 06:17:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:25.162 06:17:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:25.162 06:17:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.162 06:17:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:25.162 06:17:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.162 06:17:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:25.162 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:25.162 06:17:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.162 06:17:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:25.162 06:17:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.162 06:17:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:25.162 06:17:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.162 06:17:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:25.162 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:25.162 06:17:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.162 06:17:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:25.162 06:17:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:25.162 06:17:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:25.162 06:17:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:25.162 06:17:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:25.162 06:17:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:25.162 06:17:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:25.162 06:17:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:25.162 06:17:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:25.162 06:17:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:25.162 06:17:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:25.162 06:17:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:25.162 06:17:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:25.162 06:17:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:25.162 06:17:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:25.162 06:17:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:25.162 06:17:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:25.162 06:17:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:25.162 06:17:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:25.162 06:17:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:25.162 06:17:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:25.162 06:17:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:25.162 06:17:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:25.162 06:17:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:25.162 06:17:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:25.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:25.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:22:25.162 00:22:25.162 --- 10.0.0.2 ping statistics --- 00:22:25.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.162 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:22:25.162 06:17:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:25.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:25.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:22:25.162 00:22:25.162 --- 10.0.0.1 ping statistics --- 00:22:25.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.162 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:22:25.162 06:17:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:25.162 06:17:31 -- nvmf/common.sh@410 -- # return 0 00:22:25.162 06:17:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:25.162 06:17:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:25.162 06:17:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:25.162 06:17:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:25.162 06:17:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:25.162 06:17:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:25.162 06:17:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:25.162 06:17:31 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:25.162 06:17:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:25.162 06:17:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:25.162 06:17:31 -- common/autotest_common.sh@10 -- # set +x 00:22:25.162 06:17:31 -- nvmf/common.sh@469 -- # nvmfpid=1192228 00:22:25.162 06:17:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:25.162 06:17:31 -- nvmf/common.sh@470 -- # waitforlisten 1192228 00:22:25.162 06:17:31 -- common/autotest_common.sh@819 -- # '[' -z 1192228 ']' 00:22:25.162 06:17:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.162 06:17:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:25.162 06:17:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:25.162 06:17:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:25.162 06:17:31 -- common/autotest_common.sh@10 -- # set +x 00:22:25.162 [2024-07-13 06:17:31.613682] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:25.162 [2024-07-13 06:17:31.613759] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:25.162 EAL: No free 2048 kB hugepages reported on node 1 00:22:25.419 [2024-07-13 06:17:31.676713] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:25.419 [2024-07-13 06:17:31.781837] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:25.419 [2024-07-13 06:17:31.782009] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:25.419 [2024-07-13 06:17:31.782029] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
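The spdk_trace hint in the notice just above comes from launching nvmf_tgt with -e 0xFFFF inside the cvl_0_0_ns_spdk namespace; a sketch of taking that snapshot while the target is still running (assuming the trace tool sits in the same build/bin directory as nvmf_tgt):
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt
The shared-memory trace file it reads is the /dev/shm/nvmf_trace.0 mentioned in the next notice.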
00:22:25.419 [2024-07-13 06:17:31.782042] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:25.419 [2024-07-13 06:17:31.782096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.419 [2024-07-13 06:17:31.782124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:25.419 [2024-07-13 06:17:31.782181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:25.419 [2024-07-13 06:17:31.782184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.352 06:17:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:26.352 06:17:32 -- common/autotest_common.sh@852 -- # return 0 00:22:26.352 06:17:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:26.352 06:17:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:26.352 06:17:32 -- common/autotest_common.sh@10 -- # set +x 00:22:26.352 06:17:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:26.352 06:17:32 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:26.352 06:17:32 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:29.682 06:17:35 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:29.682 06:17:35 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:29.682 06:17:35 -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:22:29.682 06:17:35 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:29.948 06:17:36 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:29.948 06:17:36 -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:22:29.948 06:17:36 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:29.948 06:17:36 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:29.948 06:17:36 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:30.206 [2024-07-13 06:17:36.546487] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:30.206 06:17:36 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:30.464 06:17:36 -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:30.464 06:17:36 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:30.721 06:17:37 -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:30.721 06:17:37 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:30.979 06:17:37 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:30.979 [2024-07-13 06:17:37.482033] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.239 06:17:37 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:31.239 06:17:37 -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:22:31.239 06:17:37 -- host/perf.sh@53 -- # perf_app -i 0 -q 
32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0'
00:22:31.239 06:17:37 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:22:31.499 06:17:37 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0'
00:22:32.874 Initializing NVMe Controllers
00:22:32.874 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54]
00:22:32.874 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0
00:22:32.874 Initialization complete. Launching workers.
00:22:32.874 ========================================================
00:22:32.874 Latency(us)
00:22:32.874 Device Information : IOPS MiB/s Average min max
00:22:32.874 PCIE (0000:88:00.0) NSID 1 from core 0: 86094.05 336.30 371.13 36.92 4303.99
00:22:32.874 ========================================================
00:22:32.874 Total : 86094.05 336.30 371.13 36.92 4303.99
00:22:32.874
00:22:32.874 06:17:38 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:32.874 EAL: No free 2048 kB hugepages reported on node 1
00:22:34.252 Initializing NVMe Controllers
00:22:34.252 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:34.252 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:34.252 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:34.252 Initialization complete. Launching workers.
00:22:34.252 ========================================================
00:22:34.252 Latency(us)
00:22:34.252 Device Information : IOPS MiB/s Average min max
00:22:34.252 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 102.00 0.40 10123.37 174.15 44947.27
00:22:34.252 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 58.00 0.23 17628.37 7000.48 50873.76
00:22:34.252 ========================================================
00:22:34.252 Total : 160.00 0.62 12843.93 174.15 50873.76
00:22:34.252
00:22:34.252 06:17:40 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:34.252 EAL: No free 2048 kB hugepages reported on node 1
00:22:35.628 Initializing NVMe Controllers
00:22:35.628 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:35.628 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:35.628 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:35.628 Initialization complete. Launching workers.
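Between the local-PCIe baseline and the NVMe-oF runs in this block, only the -r transport ID and a few per-run flags change (the latency table for the -HI run started just above follows below). The two shapes of the command, with the build path shortened for illustration:
    spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0'
    spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'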
00:22:35.628 ========================================================
00:22:35.628 Latency(us)
00:22:35.628 Device Information : IOPS MiB/s Average min max
00:22:35.628 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8524.46 33.30 3754.88 557.66 7420.85
00:22:35.628 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3895.59 15.22 8257.33 5680.95 15996.20
00:22:35.628 ========================================================
00:22:35.628 Total : 12420.05 48.52 5167.09 557.66 15996.20
00:22:35.628
00:22:35.628 06:17:41 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:22:35.628 06:17:41 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:22:35.628 06:17:41 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:35.628 EAL: No free 2048 kB hugepages reported on node 1
00:22:38.164 Initializing NVMe Controllers
00:22:38.164 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:38.164 Controller IO queue size 128, less than required.
00:22:38.164 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:38.164 Controller IO queue size 128, less than required.
00:22:38.164 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:38.164 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:38.164 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:38.164 Initialization complete. Launching workers.
00:22:38.164 ========================================================
00:22:38.164 Latency(us)
00:22:38.164 Device Information : IOPS MiB/s Average min max
00:22:38.164 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1200.06 300.01 109112.06 71666.41 158681.61
00:22:38.164 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 589.55 147.39 223784.79 71024.50 341375.71
00:22:38.164 ========================================================
00:22:38.164 Total : 1789.60 447.40 146888.49 71024.50 341375.71
00:22:38.164
00:22:38.164 06:17:44 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:22:38.164 EAL: No free 2048 kB hugepages reported on node 1
00:22:38.164 No valid NVMe controllers or AIO or URING devices found
00:22:38.164 Initializing NVMe Controllers
00:22:38.164 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:38.164 Controller IO queue size 128, less than required.
00:22:38.164 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:38.164 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:22:38.164 Controller IO queue size 128, less than required.
00:22:38.164 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:38.164 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:22:38.164 WARNING: Some requested NVMe devices were skipped
00:22:38.164 06:17:44 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:22:38.164 EAL: No free 2048 kB hugepages reported on node 1
00:22:40.697 Initializing NVMe Controllers
00:22:40.697 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:40.697 Controller IO queue size 128, less than required.
00:22:40.697 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:40.697 Controller IO queue size 128, less than required.
00:22:40.697 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:40.697 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:40.697 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:40.697 Initialization complete. Launching workers.
00:22:40.697
00:22:40.697 ====================
00:22:40.697 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:22:40.697 TCP transport:
00:22:40.697 polls: 17788
00:22:40.697 idle_polls: 6184
00:22:40.697 sock_completions: 11604
00:22:40.697 nvme_completions: 5275
00:22:40.697 submitted_requests: 8105
00:22:40.697 queued_requests: 1
00:22:40.697
00:22:40.697 ====================
00:22:40.697 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:22:40.697 TCP transport:
00:22:40.697 polls: 18077
00:22:40.697 idle_polls: 6742
00:22:40.697 sock_completions: 11335
00:22:40.697 nvme_completions: 3876
00:22:40.697 submitted_requests: 5952
00:22:40.697 queued_requests: 1
00:22:40.697 ========================================================
00:22:40.697 Latency(us)
00:22:40.697 Device Information : IOPS MiB/s Average min max
00:22:40.697 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1382.42 345.61 95592.27 56023.66 143003.96
00:22:40.697 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1032.44 258.11 125660.08 51122.22 176581.77
00:22:40.697 ========================================================
00:22:40.697 Total : 2414.86 603.72 108447.35 51122.22 176581.77
00:22:40.697
00:22:40.697 06:17:47 -- host/perf.sh@66 -- # sync
00:22:40.697 06:17:47 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:40.955 06:17:47 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']'
00:22:40.955 06:17:47 -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']'
00:22:40.955 06:17:47 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0
00:22:44.239 06:17:50 -- host/perf.sh@72 -- # ls_guid=d5d25ece-ac44-4f6e-9281-7c2673c59918
00:22:44.239 06:17:50 -- host/perf.sh@73 -- # get_lvs_free_mb d5d25ece-ac44-4f6e-9281-7c2673c59918
00:22:44.239 06:17:50 -- common/autotest_common.sh@1343 -- # local lvs_uuid=d5d25ece-ac44-4f6e-9281-7c2673c59918
00:22:44.239 06:17:50 -- common/autotest_common.sh@1344 -- # local lvs_info
00:22:44.239 06:17:50 -- common/autotest_common.sh@1345 -- # local fc
00:22:44.239 06:17:50 -- common/autotest_common.sh@1346 -- # local cs
00:22:44.239 06:17:50 -- common/autotest_common.sh@1347 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:44.497 06:17:50 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:22:44.497 { 00:22:44.497 "uuid": "d5d25ece-ac44-4f6e-9281-7c2673c59918", 00:22:44.497 "name": "lvs_0", 00:22:44.497 "base_bdev": "Nvme0n1", 00:22:44.497 "total_data_clusters": 238234, 00:22:44.497 "free_clusters": 238234, 00:22:44.497 "block_size": 512, 00:22:44.497 "cluster_size": 4194304 00:22:44.497 } 00:22:44.497 ]' 00:22:44.497 06:17:50 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="d5d25ece-ac44-4f6e-9281-7c2673c59918") .free_clusters' 00:22:44.755 06:17:51 -- common/autotest_common.sh@1348 -- # fc=238234 00:22:44.755 06:17:51 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="d5d25ece-ac44-4f6e-9281-7c2673c59918") .cluster_size' 00:22:44.755 06:17:51 -- common/autotest_common.sh@1349 -- # cs=4194304 00:22:44.755 06:17:51 -- common/autotest_common.sh@1352 -- # free_mb=952936 00:22:44.755 06:17:51 -- common/autotest_common.sh@1353 -- # echo 952936 00:22:44.755 952936 00:22:44.755 06:17:51 -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:22:44.755 06:17:51 -- host/perf.sh@78 -- # free_mb=20480 00:22:44.755 06:17:51 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d5d25ece-ac44-4f6e-9281-7c2673c59918 lbd_0 20480 00:22:45.013 06:17:51 -- host/perf.sh@80 -- # lb_guid=3c69c929-069f-46c8-99e7-06ab2ce877d0 00:22:45.013 06:17:51 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 3c69c929-069f-46c8-99e7-06ab2ce877d0 lvs_n_0 00:22:45.994 06:17:52 -- host/perf.sh@83 -- # ls_nested_guid=13c492ca-381d-4a46-ba6a-5d68958d7e8c 00:22:45.994 06:17:52 -- host/perf.sh@84 -- # get_lvs_free_mb 13c492ca-381d-4a46-ba6a-5d68958d7e8c 00:22:45.994 06:17:52 -- common/autotest_common.sh@1343 -- # local lvs_uuid=13c492ca-381d-4a46-ba6a-5d68958d7e8c 00:22:45.994 06:17:52 -- common/autotest_common.sh@1344 -- # local lvs_info 00:22:45.994 06:17:52 -- common/autotest_common.sh@1345 -- # local fc 00:22:45.994 06:17:52 -- common/autotest_common.sh@1346 -- # local cs 00:22:45.994 06:17:52 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:45.994 06:17:52 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:22:45.994 { 00:22:45.994 "uuid": "d5d25ece-ac44-4f6e-9281-7c2673c59918", 00:22:45.994 "name": "lvs_0", 00:22:45.994 "base_bdev": "Nvme0n1", 00:22:45.994 "total_data_clusters": 238234, 00:22:45.994 "free_clusters": 233114, 00:22:45.994 "block_size": 512, 00:22:45.994 "cluster_size": 4194304 00:22:45.994 }, 00:22:45.994 { 00:22:45.994 "uuid": "13c492ca-381d-4a46-ba6a-5d68958d7e8c", 00:22:45.994 "name": "lvs_n_0", 00:22:45.994 "base_bdev": "3c69c929-069f-46c8-99e7-06ab2ce877d0", 00:22:45.994 "total_data_clusters": 5114, 00:22:45.994 "free_clusters": 5114, 00:22:45.994 "block_size": 512, 00:22:45.994 "cluster_size": 4194304 00:22:45.994 } 00:22:45.994 ]' 00:22:45.994 06:17:52 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="13c492ca-381d-4a46-ba6a-5d68958d7e8c") .free_clusters' 00:22:46.279 06:17:52 -- common/autotest_common.sh@1348 -- # fc=5114 00:22:46.279 06:17:52 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="13c492ca-381d-4a46-ba6a-5d68958d7e8c") .cluster_size' 00:22:46.279 06:17:52 -- common/autotest_common.sh@1349 -- # cs=4194304 00:22:46.279 06:17:52 -- common/autotest_common.sh@1352 -- # 
free_mb=20456 00:22:46.279 06:17:52 -- common/autotest_common.sh@1353 -- # echo 20456 00:22:46.279 20456 00:22:46.279 06:17:52 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:22:46.279 06:17:52 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 13c492ca-381d-4a46-ba6a-5d68958d7e8c lbd_nest_0 20456 00:22:46.546 06:17:52 -- host/perf.sh@88 -- # lb_nested_guid=9a4998ed-b3b9-4bd8-ada7-4ff9dcdb8619 00:22:46.546 06:17:52 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:46.546 06:17:53 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:22:46.546 06:17:53 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 9a4998ed-b3b9-4bd8-ada7-4ff9dcdb8619 00:22:46.807 06:17:53 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:47.065 06:17:53 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:22:47.066 06:17:53 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:22:47.066 06:17:53 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:22:47.066 06:17:53 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:22:47.066 06:17:53 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:47.066 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.266 Initializing NVMe Controllers 00:22:59.266 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:59.266 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:59.266 Initialization complete. Launching workers. 00:22:59.266 ======================================================== 00:22:59.266 Latency(us) 00:22:59.266 Device Information : IOPS MiB/s Average min max 00:22:59.266 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 50.40 0.02 19887.32 202.91 47880.37 00:22:59.266 ======================================================== 00:22:59.266 Total : 50.40 0.02 19887.32 202.91 47880.37 00:22:59.266 00:22:59.266 06:18:03 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:22:59.266 06:18:03 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:59.266 EAL: No free 2048 kB hugepages reported on node 1 00:23:09.237 Initializing NVMe Controllers 00:23:09.237 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:09.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:09.237 Initialization complete. Launching workers. 
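The perf.sh@95-99 lines above sweep spdk_nvme_perf over qd_depth=(1 32 128) crossed with io_size=(512 131072) against the nested lvol namespace; the first run's numbers follow below. A condensed form of that sweep, with the build path shortened for illustration:
    for qd in 1 32 128; do
        for o in 512 131072; do
            spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
        done
    done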
00:23:09.237 ======================================================== 00:23:09.237 Latency(us) 00:23:09.237 Device Information : IOPS MiB/s Average min max 00:23:09.237 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 75.50 9.44 13262.32 5001.22 50878.43 00:23:09.237 ======================================================== 00:23:09.237 Total : 75.50 9.44 13262.32 5001.22 50878.43 00:23:09.237 00:23:09.237 06:18:14 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:09.237 06:18:14 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:09.237 06:18:14 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:09.237 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.213 Initializing NVMe Controllers 00:23:19.213 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:19.213 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:19.213 Initialization complete. Launching workers. 00:23:19.213 ======================================================== 00:23:19.213 Latency(us) 00:23:19.213 Device Information : IOPS MiB/s Average min max 00:23:19.213 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7630.35 3.73 4193.35 285.44 12079.91 00:23:19.213 ======================================================== 00:23:19.213 Total : 7630.35 3.73 4193.35 285.44 12079.91 00:23:19.213 00:23:19.213 06:18:24 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:19.213 06:18:24 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:19.213 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.188 Initializing NVMe Controllers 00:23:29.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:29.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:29.188 Initialization complete. Launching workers. 00:23:29.188 ======================================================== 00:23:29.188 Latency(us) 00:23:29.188 Device Information : IOPS MiB/s Average min max 00:23:29.188 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2424.63 303.08 13206.05 1006.74 30880.67 00:23:29.188 ======================================================== 00:23:29.188 Total : 2424.63 303.08 13206.05 1006.74 30880.67 00:23:29.188 00:23:29.188 06:18:34 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:29.188 06:18:34 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:29.188 06:18:34 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:29.188 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.199 Initializing NVMe Controllers 00:23:39.199 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:39.199 Controller IO queue size 128, less than required. 00:23:39.199 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:39.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:39.199 Initialization complete. Launching workers. 
00:23:39.199 ======================================================== 00:23:39.199 Latency(us) 00:23:39.199 Device Information : IOPS MiB/s Average min max 00:23:39.199 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12047.87 5.88 10630.89 1759.48 24305.86 00:23:39.199 ======================================================== 00:23:39.199 Total : 12047.87 5.88 10630.89 1759.48 24305.86 00:23:39.199 00:23:39.199 06:18:45 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:39.199 06:18:45 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:39.199 EAL: No free 2048 kB hugepages reported on node 1 00:23:49.173 Initializing NVMe Controllers 00:23:49.173 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:49.173 Controller IO queue size 128, less than required. 00:23:49.173 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:49.173 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:49.173 Initialization complete. Launching workers. 00:23:49.173 ======================================================== 00:23:49.173 Latency(us) 00:23:49.173 Device Information : IOPS MiB/s Average min max 00:23:49.173 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1201.10 150.14 107282.68 16392.23 214153.64 00:23:49.173 ======================================================== 00:23:49.173 Total : 1201.10 150.14 107282.68 16392.23 214153.64 00:23:49.173 00:23:49.173 06:18:55 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:49.431 06:18:55 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9a4998ed-b3b9-4bd8-ada7-4ff9dcdb8619 00:23:50.365 06:18:56 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:23:50.365 06:18:56 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3c69c929-069f-46c8-99e7-06ab2ce877d0 00:23:50.623 06:18:57 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:23:50.881 06:18:57 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:50.881 06:18:57 -- host/perf.sh@114 -- # nvmftestfini 00:23:50.881 06:18:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:50.881 06:18:57 -- nvmf/common.sh@116 -- # sync 00:23:50.881 06:18:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:50.881 06:18:57 -- nvmf/common.sh@119 -- # set +e 00:23:50.881 06:18:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:50.881 06:18:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:50.881 rmmod nvme_tcp 00:23:50.881 rmmod nvme_fabrics 00:23:50.881 rmmod nvme_keyring 00:23:51.140 06:18:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:51.140 06:18:57 -- nvmf/common.sh@123 -- # set -e 00:23:51.140 06:18:57 -- nvmf/common.sh@124 -- # return 0 00:23:51.140 06:18:57 -- nvmf/common.sh@477 -- # '[' -n 1192228 ']' 00:23:51.140 06:18:57 -- nvmf/common.sh@478 -- # killprocess 1192228 00:23:51.140 06:18:57 -- common/autotest_common.sh@926 -- # '[' -z 1192228 ']' 00:23:51.140 06:18:57 -- common/autotest_common.sh@930 -- # 
kill -0 1192228 00:23:51.140 06:18:57 -- common/autotest_common.sh@931 -- # uname 00:23:51.140 06:18:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:51.140 06:18:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1192228 00:23:51.140 06:18:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:51.140 06:18:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:51.140 06:18:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1192228' 00:23:51.140 killing process with pid 1192228 00:23:51.140 06:18:57 -- common/autotest_common.sh@945 -- # kill 1192228 00:23:51.140 06:18:57 -- common/autotest_common.sh@950 -- # wait 1192228 00:23:53.047 06:18:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:53.047 06:18:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:53.047 06:18:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:53.047 06:18:59 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:53.047 06:18:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:53.047 06:18:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.047 06:18:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:53.047 06:18:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.953 06:19:01 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:54.953 00:23:54.953 real 1m31.651s 00:23:54.953 user 5m38.659s 00:23:54.953 sys 0m15.798s 00:23:54.953 06:19:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:54.953 06:19:01 -- common/autotest_common.sh@10 -- # set +x 00:23:54.953 ************************************ 00:23:54.953 END TEST nvmf_perf 00:23:54.953 ************************************ 00:23:54.953 06:19:01 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:54.953 06:19:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:54.953 06:19:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:54.953 06:19:01 -- common/autotest_common.sh@10 -- # set +x 00:23:54.953 ************************************ 00:23:54.953 START TEST nvmf_fio_host 00:23:54.953 ************************************ 00:23:54.953 06:19:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:54.953 * Looking for test storage... 
00:23:54.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:54.953 06:19:01 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:54.953 06:19:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.953 06:19:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.953 06:19:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.953 06:19:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.953 06:19:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.953 06:19:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.953 06:19:01 -- paths/export.sh@5 -- # export PATH 00:23:54.953 06:19:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.953 06:19:01 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:54.953 06:19:01 -- nvmf/common.sh@7 -- # uname -s 00:23:54.953 06:19:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:54.953 06:19:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:54.953 06:19:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:54.953 06:19:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:54.953 06:19:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:54.953 06:19:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:54.953 06:19:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:54.953 06:19:01 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:54.953 06:19:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:54.953 06:19:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:54.953 06:19:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:54.953 06:19:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:54.953 06:19:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:54.953 06:19:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:54.953 06:19:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:54.953 06:19:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:54.953 06:19:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.953 06:19:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.953 06:19:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.953 06:19:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.953 06:19:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.953 06:19:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.953 06:19:01 -- paths/export.sh@5 -- # export PATH 00:23:54.953 06:19:01 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.953 06:19:01 -- nvmf/common.sh@46 -- # : 0 00:23:54.953 06:19:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:54.953 06:19:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:54.953 06:19:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:54.953 06:19:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:54.953 06:19:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:54.953 06:19:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:54.953 06:19:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:54.953 06:19:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:54.953 06:19:01 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:54.953 06:19:01 -- host/fio.sh@14 -- # nvmftestinit 00:23:54.953 06:19:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:54.953 06:19:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:54.953 06:19:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:54.953 06:19:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:54.953 06:19:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:54.953 06:19:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.953 06:19:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:54.953 06:19:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.953 06:19:01 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:54.953 06:19:01 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:54.953 06:19:01 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:54.953 06:19:01 -- common/autotest_common.sh@10 -- # set +x 00:23:56.857 06:19:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:56.857 06:19:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:56.857 06:19:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:56.857 06:19:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:56.857 06:19:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:56.857 06:19:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:56.857 06:19:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:56.857 06:19:03 -- nvmf/common.sh@294 -- # net_devs=() 00:23:56.857 06:19:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:56.857 06:19:03 -- nvmf/common.sh@295 -- # e810=() 00:23:56.857 06:19:03 -- nvmf/common.sh@295 -- # local -ga e810 00:23:56.857 06:19:03 -- nvmf/common.sh@296 -- # x722=() 00:23:56.857 06:19:03 -- nvmf/common.sh@296 -- # local -ga x722 00:23:56.857 06:19:03 -- nvmf/common.sh@297 -- # mlx=() 00:23:56.857 06:19:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:56.857 06:19:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:56.857 06:19:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:56.857 06:19:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:56.857 06:19:03 -- 
nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:56.857 06:19:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:56.857 06:19:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:56.857 06:19:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:56.857 06:19:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:56.857 06:19:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:56.857 06:19:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:56.857 06:19:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:56.857 06:19:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:56.857 06:19:03 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:56.857 06:19:03 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:56.857 06:19:03 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:56.857 06:19:03 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:56.857 06:19:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:56.857 06:19:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:56.857 06:19:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:56.857 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:56.857 06:19:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:56.857 06:19:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:56.857 06:19:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.857 06:19:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.857 06:19:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:56.857 06:19:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:56.857 06:19:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:56.857 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:56.857 06:19:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:56.857 06:19:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:56.857 06:19:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.857 06:19:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.857 06:19:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:56.857 06:19:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:56.857 06:19:03 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:56.857 06:19:03 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:56.857 06:19:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:56.857 06:19:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.857 06:19:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:56.857 06:19:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.857 06:19:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:56.857 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:56.857 06:19:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.857 06:19:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:56.857 06:19:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.857 06:19:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:56.857 06:19:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.857 06:19:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:56.857 Found net devices under 0000:0a:00.1: cvl_0_1 
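The cvl_0_0/cvl_0_1 names echoed above are read straight from sysfs by the pci_net_devs glob; an equivalent manual check (illustrative, assuming the same PCI addresses as this node):
    ls /sys/bus/pci/devices/0000:0a:00.0/net/    # -> cvl_0_0
    ls /sys/bus/pci/devices/0000:0a:00.1/net/    # -> cvl_0_1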
00:23:56.857 06:19:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.857 06:19:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:56.857 06:19:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:56.857 06:19:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:56.857 06:19:03 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:56.857 06:19:03 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:56.857 06:19:03 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:56.857 06:19:03 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:56.857 06:19:03 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:56.857 06:19:03 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:56.857 06:19:03 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:56.857 06:19:03 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:56.857 06:19:03 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:56.857 06:19:03 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:56.857 06:19:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:56.857 06:19:03 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:56.857 06:19:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:56.857 06:19:03 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:56.857 06:19:03 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:56.857 06:19:03 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:56.858 06:19:03 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:56.858 06:19:03 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:56.858 06:19:03 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:56.858 06:19:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:56.858 06:19:03 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:56.858 06:19:03 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:56.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:56.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:23:56.858 00:23:56.858 --- 10.0.0.2 ping statistics --- 00:23:56.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.858 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:23:56.858 06:19:03 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:56.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:56.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:23:56.858 00:23:56.858 --- 10.0.0.1 ping statistics --- 00:23:56.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.858 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:23:56.858 06:19:03 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:56.858 06:19:03 -- nvmf/common.sh@410 -- # return 0 00:23:56.858 06:19:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:56.858 06:19:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:56.858 06:19:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:56.858 06:19:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:56.858 06:19:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:56.858 06:19:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:56.858 06:19:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:57.116 06:19:03 -- host/fio.sh@16 -- # [[ y != y ]] 00:23:57.116 06:19:03 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:57.116 06:19:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:57.116 06:19:03 -- common/autotest_common.sh@10 -- # set +x 00:23:57.116 06:19:03 -- host/fio.sh@24 -- # nvmfpid=1205300 00:23:57.116 06:19:03 -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:57.116 06:19:03 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:57.116 06:19:03 -- host/fio.sh@28 -- # waitforlisten 1205300 00:23:57.116 06:19:03 -- common/autotest_common.sh@819 -- # '[' -z 1205300 ']' 00:23:57.116 06:19:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.116 06:19:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:57.116 06:19:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.116 06:19:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:57.116 06:19:03 -- common/autotest_common.sh@10 -- # set +x 00:23:57.116 [2024-07-13 06:19:03.417956] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:57.117 [2024-07-13 06:19:03.418032] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.117 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.117 [2024-07-13 06:19:03.487552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:57.117 [2024-07-13 06:19:03.602447] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:57.117 [2024-07-13 06:19:03.602583] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:57.117 [2024-07-13 06:19:03.602600] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:57.117 [2024-07-13 06:19:03.602613] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
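Once this second target is up, the fio runs later in this test drive it through SPDK's external fio ioengine rather than the kernel initiator; the general shape of that invocation, with paths shortened, as traced further below:
    LD_PRELOAD=spdk/build/fio/spdk_nvme /usr/src/fio/fio spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096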
00:23:57.117 [2024-07-13 06:19:03.602668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.117 [2024-07-13 06:19:03.605886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:57.117 [2024-07-13 06:19:03.605923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:57.117 [2024-07-13 06:19:03.605927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.051 06:19:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:58.051 06:19:04 -- common/autotest_common.sh@852 -- # return 0 00:23:58.051 06:19:04 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:58.308 [2024-07-13 06:19:04.584006] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:58.308 06:19:04 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:58.308 06:19:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:58.308 06:19:04 -- common/autotest_common.sh@10 -- # set +x 00:23:58.308 06:19:04 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:58.567 Malloc1 00:23:58.567 06:19:04 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:58.824 06:19:05 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:59.082 06:19:05 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:59.339 [2024-07-13 06:19:05.699952] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:59.339 06:19:05 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:59.597 06:19:05 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:59.597 06:19:05 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:59.597 06:19:05 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:59.597 06:19:05 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:23:59.597 06:19:05 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:59.597 06:19:05 -- common/autotest_common.sh@1318 -- # local sanitizers 00:23:59.597 06:19:05 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:59.597 06:19:05 -- common/autotest_common.sh@1320 -- # shift 00:23:59.597 06:19:05 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:23:59.597 06:19:05 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:23:59.597 06:19:05 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:59.597 06:19:05 -- common/autotest_common.sh@1324 -- # grep 
libasan 00:23:59.597 06:19:05 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:23:59.597 06:19:05 -- common/autotest_common.sh@1324 -- # asan_lib= 00:23:59.597 06:19:05 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:23:59.597 06:19:05 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:23:59.597 06:19:05 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:59.597 06:19:05 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:23:59.597 06:19:05 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:23:59.597 06:19:05 -- common/autotest_common.sh@1324 -- # asan_lib= 00:23:59.597 06:19:05 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:23:59.597 06:19:05 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:59.597 06:19:05 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:59.855 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:59.855 fio-3.35 00:23:59.855 Starting 1 thread 00:23:59.855 EAL: No free 2048 kB hugepages reported on node 1 00:24:02.379 00:24:02.379 test: (groupid=0, jobs=1): err= 0: pid=1205797: Sat Jul 13 06:19:08 2024 00:24:02.379 read: IOPS=9635, BW=37.6MiB/s (39.5MB/s)(75.5MiB/2006msec) 00:24:02.379 slat (nsec): min=1942, max=161998, avg=2416.40, stdev=1853.49 00:24:02.379 clat (usec): min=2241, max=12641, avg=7351.20, stdev=542.36 00:24:02.379 lat (usec): min=2274, max=12644, avg=7353.62, stdev=542.23 00:24:02.379 clat percentiles (usec): 00:24:02.379 | 1.00th=[ 6128], 5.00th=[ 6521], 10.00th=[ 6718], 20.00th=[ 6915], 00:24:02.379 | 30.00th=[ 7111], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7504], 00:24:02.379 | 70.00th=[ 7635], 80.00th=[ 7767], 90.00th=[ 7963], 95.00th=[ 8160], 00:24:02.379 | 99.00th=[ 8586], 99.50th=[ 8717], 99.90th=[11076], 99.95th=[11994], 00:24:02.379 | 99.99th=[12125] 00:24:02.379 bw ( KiB/s): min=37696, max=39168, per=99.94%, avg=38518.00, stdev=633.21, samples=4 00:24:02.379 iops : min= 9424, max= 9792, avg=9629.50, stdev=158.30, samples=4 00:24:02.379 write: IOPS=9641, BW=37.7MiB/s (39.5MB/s)(75.5MiB/2006msec); 0 zone resets 00:24:02.379 slat (usec): min=2, max=150, avg= 2.57, stdev= 1.51 00:24:02.379 clat (usec): min=1464, max=11170, avg=5886.87, stdev=468.61 00:24:02.379 lat (usec): min=1473, max=11172, avg=5889.44, stdev=468.54 00:24:02.379 clat percentiles (usec): 00:24:02.379 | 1.00th=[ 4817], 5.00th=[ 5145], 10.00th=[ 5342], 20.00th=[ 5538], 00:24:02.379 | 30.00th=[ 5669], 40.00th=[ 5800], 50.00th=[ 5866], 60.00th=[ 5997], 00:24:02.379 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6456], 95.00th=[ 6587], 00:24:02.379 | 99.00th=[ 6915], 99.50th=[ 7046], 99.90th=[ 8586], 99.95th=[ 9634], 00:24:02.379 | 99.99th=[11076] 00:24:02.379 bw ( KiB/s): min=38344, max=38912, per=99.99%, avg=38562.00, stdev=245.48, samples=4 00:24:02.379 iops : min= 9586, max= 9728, avg=9640.50, stdev=61.37, samples=4 00:24:02.379 lat (msec) : 2=0.02%, 4=0.11%, 10=99.77%, 20=0.10% 00:24:02.379 cpu : usr=59.95%, sys=34.66%, ctx=31, majf=0, minf=5 00:24:02.379 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:02.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.379 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:02.379 issued rwts: total=19328,19340,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.379 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:02.379 00:24:02.379 Run status group 0 (all jobs): 00:24:02.379 READ: bw=37.6MiB/s (39.5MB/s), 37.6MiB/s-37.6MiB/s (39.5MB/s-39.5MB/s), io=75.5MiB (79.2MB), run=2006-2006msec 00:24:02.379 WRITE: bw=37.7MiB/s (39.5MB/s), 37.7MiB/s-37.7MiB/s (39.5MB/s-39.5MB/s), io=75.5MiB (79.2MB), run=2006-2006msec 00:24:02.379 06:19:08 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:02.379 06:19:08 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:02.379 06:19:08 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:24:02.379 06:19:08 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:02.379 06:19:08 -- common/autotest_common.sh@1318 -- # local sanitizers 00:24:02.379 06:19:08 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:02.379 06:19:08 -- common/autotest_common.sh@1320 -- # shift 00:24:02.379 06:19:08 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:24:02.380 06:19:08 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:02.380 06:19:08 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:02.380 06:19:08 -- common/autotest_common.sh@1324 -- # grep libasan 00:24:02.380 06:19:08 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:02.380 06:19:08 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:02.380 06:19:08 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:02.380 06:19:08 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:02.380 06:19:08 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:02.380 06:19:08 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:24:02.380 06:19:08 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:02.380 06:19:08 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:02.380 06:19:08 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:02.380 06:19:08 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:02.380 06:19:08 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:02.380 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:02.380 fio-3.35 00:24:02.380 Starting 1 thread 00:24:02.380 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.907 00:24:04.907 test: (groupid=0, jobs=1): err= 0: pid=1206140: Sat Jul 13 06:19:11 2024 00:24:04.907 read: IOPS=8470, BW=132MiB/s (139MB/s)(266MiB/2011msec) 00:24:04.907 slat (nsec): min=2728, max=96291, avg=3588.24, stdev=1540.00 00:24:04.907 clat (usec): min=1788, max=17035, 
avg=8888.05, stdev=2182.15 00:24:04.907 lat (usec): min=1792, max=17039, avg=8891.64, stdev=2182.20 00:24:04.907 clat percentiles (usec): 00:24:04.907 | 1.00th=[ 4555], 5.00th=[ 5407], 10.00th=[ 6194], 20.00th=[ 7046], 00:24:04.907 | 30.00th=[ 7635], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9241], 00:24:04.907 | 70.00th=[ 9896], 80.00th=[10683], 90.00th=[11731], 95.00th=[12649], 00:24:04.907 | 99.00th=[14877], 99.50th=[15533], 99.90th=[16450], 99.95th=[16450], 00:24:04.907 | 99.99th=[16909] 00:24:04.907 bw ( KiB/s): min=62304, max=78400, per=50.93%, avg=69024.00, stdev=6851.64, samples=4 00:24:04.907 iops : min= 3894, max= 4900, avg=4314.00, stdev=428.23, samples=4 00:24:04.907 write: IOPS=4923, BW=76.9MiB/s (80.7MB/s)(141MiB/1838msec); 0 zone resets 00:24:04.907 slat (usec): min=30, max=149, avg=33.08, stdev= 4.48 00:24:04.907 clat (usec): min=3622, max=20468, avg=10917.44, stdev=1989.28 00:24:04.907 lat (usec): min=3654, max=20500, avg=10950.52, stdev=1989.26 00:24:04.907 clat percentiles (usec): 00:24:04.907 | 1.00th=[ 6980], 5.00th=[ 8160], 10.00th=[ 8586], 20.00th=[ 9241], 00:24:04.907 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[10683], 60.00th=[11207], 00:24:04.907 | 70.00th=[11863], 80.00th=[12649], 90.00th=[13698], 95.00th=[14484], 00:24:04.907 | 99.00th=[15926], 99.50th=[16450], 99.90th=[17695], 99.95th=[19792], 00:24:04.907 | 99.99th=[20579] 00:24:04.907 bw ( KiB/s): min=64384, max=80576, per=91.24%, avg=71872.00, stdev=6684.86, samples=4 00:24:04.907 iops : min= 4024, max= 5036, avg=4492.00, stdev=417.80, samples=4 00:24:04.907 lat (msec) : 2=0.02%, 4=0.21%, 10=58.57%, 20=41.19%, 50=0.02% 00:24:04.907 cpu : usr=74.48%, sys=22.34%, ctx=36, majf=0, minf=1 00:24:04.907 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:04.907 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:04.907 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:04.907 issued rwts: total=17034,9049,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:04.907 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:04.907 00:24:04.907 Run status group 0 (all jobs): 00:24:04.907 READ: bw=132MiB/s (139MB/s), 132MiB/s-132MiB/s (139MB/s-139MB/s), io=266MiB (279MB), run=2011-2011msec 00:24:04.907 WRITE: bw=76.9MiB/s (80.7MB/s), 76.9MiB/s-76.9MiB/s (80.7MB/s-80.7MB/s), io=141MiB (148MB), run=1838-1838msec 00:24:04.907 06:19:11 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:04.907 06:19:11 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:24:04.907 06:19:11 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:24:04.907 06:19:11 -- host/fio.sh@51 -- # get_nvme_bdfs 00:24:04.907 06:19:11 -- common/autotest_common.sh@1498 -- # bdfs=() 00:24:04.907 06:19:11 -- common/autotest_common.sh@1498 -- # local bdfs 00:24:04.907 06:19:11 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:24:04.907 06:19:11 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:04.907 06:19:11 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:24:04.907 06:19:11 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:24:04.907 06:19:11 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:24:04.907 06:19:11 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b 
Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:24:08.183 Nvme0n1 00:24:08.183 06:19:14 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:24:11.462 06:19:17 -- host/fio.sh@53 -- # ls_guid=8fdc5c34-2cfb-4b78-87bc-e0e227449612 00:24:11.462 06:19:17 -- host/fio.sh@54 -- # get_lvs_free_mb 8fdc5c34-2cfb-4b78-87bc-e0e227449612 00:24:11.462 06:19:17 -- common/autotest_common.sh@1343 -- # local lvs_uuid=8fdc5c34-2cfb-4b78-87bc-e0e227449612 00:24:11.462 06:19:17 -- common/autotest_common.sh@1344 -- # local lvs_info 00:24:11.462 06:19:17 -- common/autotest_common.sh@1345 -- # local fc 00:24:11.462 06:19:17 -- common/autotest_common.sh@1346 -- # local cs 00:24:11.462 06:19:17 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:11.462 06:19:17 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:24:11.462 { 00:24:11.462 "uuid": "8fdc5c34-2cfb-4b78-87bc-e0e227449612", 00:24:11.462 "name": "lvs_0", 00:24:11.462 "base_bdev": "Nvme0n1", 00:24:11.462 "total_data_clusters": 930, 00:24:11.462 "free_clusters": 930, 00:24:11.462 "block_size": 512, 00:24:11.462 "cluster_size": 1073741824 00:24:11.462 } 00:24:11.462 ]' 00:24:11.462 06:19:17 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="8fdc5c34-2cfb-4b78-87bc-e0e227449612") .free_clusters' 00:24:11.462 06:19:17 -- common/autotest_common.sh@1348 -- # fc=930 00:24:11.462 06:19:17 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="8fdc5c34-2cfb-4b78-87bc-e0e227449612") .cluster_size' 00:24:11.462 06:19:17 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:24:11.462 06:19:17 -- common/autotest_common.sh@1352 -- # free_mb=952320 00:24:11.462 06:19:17 -- common/autotest_common.sh@1353 -- # echo 952320 00:24:11.462 952320 00:24:11.462 06:19:17 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:24:11.720 c101f975-6303-4bea-9145-cd1cb54aae4a 00:24:11.720 06:19:18 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:24:11.978 06:19:18 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:24:12.236 06:19:18 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:12.493 06:19:18 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:12.493 06:19:18 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:12.493 06:19:18 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:24:12.493 06:19:18 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:12.494 06:19:18 -- common/autotest_common.sh@1318 -- # local sanitizers 00:24:12.494 06:19:18 -- common/autotest_common.sh@1319 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:12.494 06:19:18 -- common/autotest_common.sh@1320 -- # shift 00:24:12.494 06:19:18 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:24:12.494 06:19:18 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:12.494 06:19:18 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:12.494 06:19:18 -- common/autotest_common.sh@1324 -- # grep libasan 00:24:12.494 06:19:18 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:12.494 06:19:18 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:12.494 06:19:18 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:12.494 06:19:18 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:12.494 06:19:18 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:12.494 06:19:18 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:24:12.494 06:19:18 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:12.494 06:19:18 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:12.494 06:19:18 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:12.494 06:19:18 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:12.494 06:19:18 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:12.494 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:12.494 fio-3.35 00:24:12.494 Starting 1 thread 00:24:12.751 EAL: No free 2048 kB hugepages reported on node 1 00:24:15.277 00:24:15.277 test: (groupid=0, jobs=1): err= 0: pid=1207458: Sat Jul 13 06:19:21 2024 00:24:15.277 read: IOPS=5723, BW=22.4MiB/s (23.4MB/s)(44.9MiB/2008msec) 00:24:15.277 slat (nsec): min=1888, max=144535, avg=2475.71, stdev=2012.99 00:24:15.277 clat (usec): min=967, max=171737, avg=12369.44, stdev=11836.09 00:24:15.277 lat (usec): min=970, max=171774, avg=12371.92, stdev=11836.40 00:24:15.277 clat percentiles (msec): 00:24:15.277 | 1.00th=[ 10], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 11], 00:24:15.277 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 12], 00:24:15.277 | 70.00th=[ 12], 80.00th=[ 13], 90.00th=[ 13], 95.00th=[ 14], 00:24:15.277 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:24:15.277 | 99.99th=[ 171] 00:24:15.277 bw ( KiB/s): min=15984, max=25464, per=99.87%, avg=22862.00, stdev=4590.62, samples=4 00:24:15.277 iops : min= 3996, max= 6366, avg=5715.50, stdev=1147.66, samples=4 00:24:15.277 write: IOPS=5712, BW=22.3MiB/s (23.4MB/s)(44.8MiB/2008msec); 0 zone resets 00:24:15.277 slat (nsec): min=1970, max=123537, avg=2611.67, stdev=1764.97 00:24:15.277 clat (usec): min=316, max=169663, avg=9865.30, stdev=11143.41 00:24:15.277 lat (usec): min=319, max=169669, avg=9867.91, stdev=11143.71 00:24:15.277 clat percentiles (msec): 00:24:15.277 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 9], 00:24:15.277 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 10], 60.00th=[ 10], 00:24:15.277 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 11], 95.00th=[ 11], 00:24:15.277 | 99.00th=[ 12], 99.50th=[ 153], 99.90th=[ 169], 99.95th=[ 169], 00:24:15.277 | 99.99th=[ 169] 00:24:15.277 bw ( KiB/s): 
min=17000, max=24832, per=99.82%, avg=22810.00, stdev=3874.16, samples=4 00:24:15.277 iops : min= 4250, max= 6208, avg=5702.50, stdev=968.54, samples=4 00:24:15.277 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:24:15.277 lat (msec) : 2=0.02%, 4=0.11%, 10=46.71%, 20=52.58%, 250=0.56% 00:24:15.277 cpu : usr=53.81%, sys=42.65%, ctx=90, majf=0, minf=5 00:24:15.277 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:24:15.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:15.277 issued rwts: total=11492,11471,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.277 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:15.277 00:24:15.277 Run status group 0 (all jobs): 00:24:15.277 READ: bw=22.4MiB/s (23.4MB/s), 22.4MiB/s-22.4MiB/s (23.4MB/s-23.4MB/s), io=44.9MiB (47.1MB), run=2008-2008msec 00:24:15.277 WRITE: bw=22.3MiB/s (23.4MB/s), 22.3MiB/s-22.3MiB/s (23.4MB/s-23.4MB/s), io=44.8MiB (47.0MB), run=2008-2008msec 00:24:15.277 06:19:21 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:15.277 06:19:21 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:24:16.647 06:19:22 -- host/fio.sh@64 -- # ls_nested_guid=0e5b4423-cee8-402d-a8e7-97f352110b56 00:24:16.647 06:19:22 -- host/fio.sh@65 -- # get_lvs_free_mb 0e5b4423-cee8-402d-a8e7-97f352110b56 00:24:16.647 06:19:22 -- common/autotest_common.sh@1343 -- # local lvs_uuid=0e5b4423-cee8-402d-a8e7-97f352110b56 00:24:16.647 06:19:22 -- common/autotest_common.sh@1344 -- # local lvs_info 00:24:16.647 06:19:22 -- common/autotest_common.sh@1345 -- # local fc 00:24:16.647 06:19:22 -- common/autotest_common.sh@1346 -- # local cs 00:24:16.647 06:19:22 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:16.647 06:19:22 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:24:16.647 { 00:24:16.647 "uuid": "8fdc5c34-2cfb-4b78-87bc-e0e227449612", 00:24:16.647 "name": "lvs_0", 00:24:16.647 "base_bdev": "Nvme0n1", 00:24:16.647 "total_data_clusters": 930, 00:24:16.647 "free_clusters": 0, 00:24:16.647 "block_size": 512, 00:24:16.648 "cluster_size": 1073741824 00:24:16.648 }, 00:24:16.648 { 00:24:16.648 "uuid": "0e5b4423-cee8-402d-a8e7-97f352110b56", 00:24:16.648 "name": "lvs_n_0", 00:24:16.648 "base_bdev": "c101f975-6303-4bea-9145-cd1cb54aae4a", 00:24:16.648 "total_data_clusters": 237847, 00:24:16.648 "free_clusters": 237847, 00:24:16.648 "block_size": 512, 00:24:16.648 "cluster_size": 4194304 00:24:16.648 } 00:24:16.648 ]' 00:24:16.648 06:19:22 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="0e5b4423-cee8-402d-a8e7-97f352110b56") .free_clusters' 00:24:16.648 06:19:23 -- common/autotest_common.sh@1348 -- # fc=237847 00:24:16.648 06:19:23 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="0e5b4423-cee8-402d-a8e7-97f352110b56") .cluster_size' 00:24:16.648 06:19:23 -- common/autotest_common.sh@1349 -- # cs=4194304 00:24:16.648 06:19:23 -- common/autotest_common.sh@1352 -- # free_mb=951388 00:24:16.648 06:19:23 -- common/autotest_common.sh@1353 -- # echo 951388 00:24:16.648 951388 00:24:16.648 06:19:23 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 
951388 00:24:17.212 8ef403cd-85e1-4404-9908-d5928fd5706f 00:24:17.212 06:19:23 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:24:17.479 06:19:23 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:24:17.740 06:19:24 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:18.039 06:19:24 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:18.039 06:19:24 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:18.039 06:19:24 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:24:18.039 06:19:24 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:18.039 06:19:24 -- common/autotest_common.sh@1318 -- # local sanitizers 00:24:18.039 06:19:24 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:18.039 06:19:24 -- common/autotest_common.sh@1320 -- # shift 00:24:18.039 06:19:24 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:24:18.039 06:19:24 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:18.039 06:19:24 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:18.039 06:19:24 -- common/autotest_common.sh@1324 -- # grep libasan 00:24:18.039 06:19:24 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:18.039 06:19:24 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:18.039 06:19:24 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:18.039 06:19:24 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:18.039 06:19:24 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:18.039 06:19:24 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:24:18.039 06:19:24 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:18.039 06:19:24 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:18.039 06:19:24 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:18.039 06:19:24 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:18.039 06:19:24 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:18.297 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:18.297 fio-3.35 00:24:18.297 Starting 1 thread 00:24:18.297 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.827 00:24:20.827 test: (groupid=0, jobs=1): err= 0: pid=1208336: Sat Jul 13 06:19:27 2024 00:24:20.827 read: IOPS=6039, BW=23.6MiB/s (24.7MB/s)(47.4MiB/2009msec) 
00:24:20.827 slat (nsec): min=1965, max=228298, avg=2636.77, stdev=2944.23 00:24:20.827 clat (usec): min=4825, max=18649, avg=11750.38, stdev=949.38 00:24:20.827 lat (usec): min=4836, max=18652, avg=11753.01, stdev=949.23 00:24:20.827 clat percentiles (usec): 00:24:20.827 | 1.00th=[ 9634], 5.00th=[10290], 10.00th=[10552], 20.00th=[10945], 00:24:20.827 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11731], 60.00th=[11994], 00:24:20.827 | 70.00th=[12256], 80.00th=[12518], 90.00th=[12911], 95.00th=[13173], 00:24:20.827 | 99.00th=[13960], 99.50th=[14091], 99.90th=[15926], 99.95th=[17433], 00:24:20.827 | 99.99th=[18482] 00:24:20.827 bw ( KiB/s): min=23096, max=24696, per=99.84%, avg=24120.00, stdev=703.03, samples=4 00:24:20.827 iops : min= 5774, max= 6174, avg=6030.00, stdev=175.76, samples=4 00:24:20.827 write: IOPS=6020, BW=23.5MiB/s (24.7MB/s)(47.2MiB/2009msec); 0 zone resets 00:24:20.827 slat (usec): min=2, max=224, avg= 2.80, stdev= 2.47 00:24:20.827 clat (usec): min=2465, max=17274, avg=9340.23, stdev=858.22 00:24:20.827 lat (usec): min=2475, max=17277, avg=9343.03, stdev=858.19 00:24:20.827 clat percentiles (usec): 00:24:20.827 | 1.00th=[ 7308], 5.00th=[ 8029], 10.00th=[ 8356], 20.00th=[ 8717], 00:24:20.827 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9503], 00:24:20.827 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10290], 95.00th=[10552], 00:24:20.827 | 99.00th=[11207], 99.50th=[11600], 99.90th=[15533], 99.95th=[15926], 00:24:20.827 | 99.99th=[17171] 00:24:20.827 bw ( KiB/s): min=24000, max=24192, per=100.00%, avg=24086.00, stdev=79.83, samples=4 00:24:20.827 iops : min= 6000, max= 6048, avg=6021.50, stdev=19.96, samples=4 00:24:20.827 lat (msec) : 4=0.05%, 10=41.30%, 20=58.65% 00:24:20.827 cpu : usr=53.24%, sys=43.03%, ctx=109, majf=0, minf=5 00:24:20.827 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:24:20.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:20.827 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:20.827 issued rwts: total=12134,12096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:20.827 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:20.827 00:24:20.827 Run status group 0 (all jobs): 00:24:20.827 READ: bw=23.6MiB/s (24.7MB/s), 23.6MiB/s-23.6MiB/s (24.7MB/s-24.7MB/s), io=47.4MiB (49.7MB), run=2009-2009msec 00:24:20.827 WRITE: bw=23.5MiB/s (24.7MB/s), 23.5MiB/s-23.5MiB/s (24.7MB/s-24.7MB/s), io=47.2MiB (49.5MB), run=2009-2009msec 00:24:20.827 06:19:27 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:21.085 06:19:27 -- host/fio.sh@74 -- # sync 00:24:21.085 06:19:27 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:24:25.273 06:19:31 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:24:25.273 06:19:31 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:24:27.804 06:19:34 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:24:28.369 06:19:34 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:24:30.270 06:19:36 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:30.270 06:19:36 -- host/fio.sh@85 -- # 
rm -f ./local-test-0-verify.state 00:24:30.270 06:19:36 -- host/fio.sh@86 -- # nvmftestfini 00:24:30.270 06:19:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:30.270 06:19:36 -- nvmf/common.sh@116 -- # sync 00:24:30.270 06:19:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:30.270 06:19:36 -- nvmf/common.sh@119 -- # set +e 00:24:30.270 06:19:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:30.270 06:19:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:30.270 rmmod nvme_tcp 00:24:30.270 rmmod nvme_fabrics 00:24:30.270 rmmod nvme_keyring 00:24:30.270 06:19:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:30.270 06:19:36 -- nvmf/common.sh@123 -- # set -e 00:24:30.270 06:19:36 -- nvmf/common.sh@124 -- # return 0 00:24:30.270 06:19:36 -- nvmf/common.sh@477 -- # '[' -n 1205300 ']' 00:24:30.270 06:19:36 -- nvmf/common.sh@478 -- # killprocess 1205300 00:24:30.270 06:19:36 -- common/autotest_common.sh@926 -- # '[' -z 1205300 ']' 00:24:30.270 06:19:36 -- common/autotest_common.sh@930 -- # kill -0 1205300 00:24:30.270 06:19:36 -- common/autotest_common.sh@931 -- # uname 00:24:30.270 06:19:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:30.270 06:19:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1205300 00:24:30.270 06:19:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:30.270 06:19:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:30.270 06:19:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1205300' 00:24:30.270 killing process with pid 1205300 00:24:30.270 06:19:36 -- common/autotest_common.sh@945 -- # kill 1205300 00:24:30.270 06:19:36 -- common/autotest_common.sh@950 -- # wait 1205300 00:24:30.528 06:19:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:30.528 06:19:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:30.528 06:19:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:30.528 06:19:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:30.528 06:19:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:30.528 06:19:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.528 06:19:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:30.528 06:19:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.432 06:19:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:32.432 00:24:32.432 real 0m37.724s 00:24:32.432 user 2m24.139s 00:24:32.432 sys 0m7.262s 00:24:32.432 06:19:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:32.432 06:19:38 -- common/autotest_common.sh@10 -- # set +x 00:24:32.432 ************************************ 00:24:32.432 END TEST nvmf_fio_host 00:24:32.432 ************************************ 00:24:32.432 06:19:38 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:32.432 06:19:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:32.432 06:19:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:32.432 06:19:38 -- common/autotest_common.sh@10 -- # set +x 00:24:32.432 ************************************ 00:24:32.432 START TEST nvmf_failover 00:24:32.432 ************************************ 00:24:32.432 06:19:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:32.690 * Looking for test storage... 
00:24:32.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:32.690 06:19:38 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:32.690 06:19:38 -- nvmf/common.sh@7 -- # uname -s 00:24:32.690 06:19:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:32.690 06:19:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:32.690 06:19:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:32.690 06:19:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:32.690 06:19:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:32.690 06:19:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:32.690 06:19:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:32.690 06:19:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:32.690 06:19:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:32.690 06:19:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:32.690 06:19:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:32.690 06:19:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:32.690 06:19:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:32.690 06:19:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:32.690 06:19:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:32.690 06:19:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:32.690 06:19:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:32.690 06:19:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:32.690 06:19:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:32.690 06:19:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.690 06:19:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.690 06:19:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.690 06:19:38 -- paths/export.sh@5 -- # export PATH 00:24:32.690 06:19:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.690 06:19:38 -- nvmf/common.sh@46 -- # : 0 00:24:32.690 06:19:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:32.690 06:19:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:32.690 06:19:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:32.690 06:19:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:32.690 06:19:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:32.690 06:19:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:32.690 06:19:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:32.690 06:19:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:32.690 06:19:38 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:32.690 06:19:38 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:32.690 06:19:38 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:32.690 06:19:38 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:32.690 06:19:38 -- host/failover.sh@18 -- # nvmftestinit 00:24:32.690 06:19:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:32.690 06:19:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:32.690 06:19:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:32.690 06:19:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:32.690 06:19:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:32.690 06:19:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.690 06:19:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:32.690 06:19:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.690 06:19:38 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:32.690 06:19:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:32.690 06:19:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:32.690 06:19:38 -- common/autotest_common.sh@10 -- # set +x 00:24:34.592 06:19:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:34.592 06:19:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:34.592 06:19:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:34.592 06:19:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:34.592 06:19:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:34.592 06:19:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:34.592 06:19:41 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:24:34.592 06:19:41 -- nvmf/common.sh@294 -- # net_devs=() 00:24:34.592 06:19:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:34.592 06:19:41 -- nvmf/common.sh@295 -- # e810=() 00:24:34.592 06:19:41 -- nvmf/common.sh@295 -- # local -ga e810 00:24:34.592 06:19:41 -- nvmf/common.sh@296 -- # x722=() 00:24:34.592 06:19:41 -- nvmf/common.sh@296 -- # local -ga x722 00:24:34.592 06:19:41 -- nvmf/common.sh@297 -- # mlx=() 00:24:34.592 06:19:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:34.592 06:19:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:34.593 06:19:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:34.593 06:19:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:34.593 06:19:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:34.593 06:19:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:34.593 06:19:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:34.593 06:19:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:34.593 06:19:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:34.593 06:19:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:34.593 06:19:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:34.593 06:19:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:34.593 06:19:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:34.593 06:19:41 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:34.593 06:19:41 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:34.593 06:19:41 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:34.593 06:19:41 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:34.593 06:19:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:34.593 06:19:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:34.593 06:19:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:34.593 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:34.593 06:19:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:34.593 06:19:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:34.593 06:19:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.593 06:19:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.593 06:19:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:34.593 06:19:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:34.593 06:19:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:34.593 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:34.593 06:19:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:34.593 06:19:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:34.593 06:19:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.593 06:19:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.593 06:19:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:34.593 06:19:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:34.593 06:19:41 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:34.593 06:19:41 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:34.593 06:19:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:34.593 06:19:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.593 06:19:41 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:24:34.593 06:19:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.593 06:19:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:34.593 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:34.593 06:19:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.593 06:19:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:34.593 06:19:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.593 06:19:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:34.593 06:19:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.593 06:19:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:34.593 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:34.593 06:19:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.593 06:19:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:34.593 06:19:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:34.593 06:19:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:34.593 06:19:41 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:34.593 06:19:41 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:34.593 06:19:41 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:34.593 06:19:41 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:34.593 06:19:41 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:34.593 06:19:41 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:34.593 06:19:41 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:34.593 06:19:41 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:34.593 06:19:41 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:34.593 06:19:41 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:34.593 06:19:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:34.593 06:19:41 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:34.593 06:19:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:34.593 06:19:41 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:34.593 06:19:41 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:34.854 06:19:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:34.854 06:19:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:34.854 06:19:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:34.854 06:19:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:34.854 06:19:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:34.854 06:19:41 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:34.854 06:19:41 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:34.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:34.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:24:34.854 00:24:34.854 --- 10.0.0.2 ping statistics --- 00:24:34.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.854 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:24:34.854 06:19:41 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:34.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:34.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:24:34.854 00:24:34.854 --- 10.0.0.1 ping statistics --- 00:24:34.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.854 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:24:34.854 06:19:41 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:34.854 06:19:41 -- nvmf/common.sh@410 -- # return 0 00:24:34.854 06:19:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:34.854 06:19:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:34.854 06:19:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:34.854 06:19:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:34.854 06:19:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:34.854 06:19:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:34.854 06:19:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:34.854 06:19:41 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:34.854 06:19:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:34.854 06:19:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:34.854 06:19:41 -- common/autotest_common.sh@10 -- # set +x 00:24:34.854 06:19:41 -- nvmf/common.sh@469 -- # nvmfpid=1211632 00:24:34.854 06:19:41 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:34.854 06:19:41 -- nvmf/common.sh@470 -- # waitforlisten 1211632 00:24:34.854 06:19:41 -- common/autotest_common.sh@819 -- # '[' -z 1211632 ']' 00:24:34.854 06:19:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.854 06:19:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:34.854 06:19:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.854 06:19:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:34.854 06:19:41 -- common/autotest_common.sh@10 -- # set +x 00:24:34.854 [2024-07-13 06:19:41.260649] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:24:34.854 [2024-07-13 06:19:41.260739] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.854 EAL: No free 2048 kB hugepages reported on node 1 00:24:34.854 [2024-07-13 06:19:41.327626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:35.115 [2024-07-13 06:19:41.436932] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:35.115 [2024-07-13 06:19:41.437109] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:35.115 [2024-07-13 06:19:41.437127] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:35.115 [2024-07-13 06:19:41.437140] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
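The target bring-up logged above is just iproute2 plumbing plus an app launch: the E810 port cvl_0_0 is moved into a private network namespace and carries the target address 10.0.0.2, its peer cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1, reachability is checked both ways with ping, and nvmf_tgt is then started inside the namespace. Condensed into a sketch (interface names and addresses are the ones used on this rig; the long workspace path is shortened to ./):

  # move the target-side port into its own namespace, keep the initiator-side port in the root ns
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address (root ns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address (inside ns)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                 # root ns -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # namespace -> root ns
  # the target then runs inside the namespace
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE

The -m 0xE core mask is why three reactors come up on cores 1, 2 and 3 in the lines that follow.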
00:24:35.115 [2024-07-13 06:19:41.437245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:35.115 [2024-07-13 06:19:41.437296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:35.115 [2024-07-13 06:19:41.437299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.050 06:19:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:36.050 06:19:42 -- common/autotest_common.sh@852 -- # return 0 00:24:36.050 06:19:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:36.050 06:19:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:36.050 06:19:42 -- common/autotest_common.sh@10 -- # set +x 00:24:36.050 06:19:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:36.050 06:19:42 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:36.050 [2024-07-13 06:19:42.431329] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:36.050 06:19:42 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:36.309 Malloc0 00:24:36.309 06:19:42 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:36.591 06:19:42 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:36.859 06:19:43 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:37.116 [2024-07-13 06:19:43.433114] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:37.116 06:19:43 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:37.373 [2024-07-13 06:19:43.673834] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:37.373 06:19:43 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:37.630 [2024-07-13 06:19:43.906668] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:37.630 06:19:43 -- host/failover.sh@31 -- # bdevperf_pid=1211943 00:24:37.630 06:19:43 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:37.630 06:19:43 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:37.630 06:19:43 -- host/failover.sh@34 -- # waitforlisten 1211943 /var/tmp/bdevperf.sock 00:24:37.630 06:19:43 -- common/autotest_common.sh@819 -- # '[' -z 1211943 ']' 00:24:37.630 06:19:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:37.630 06:19:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:37.630 06:19:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:24:37.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:37.630 06:19:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:37.630 06:19:43 -- common/autotest_common.sh@10 -- # set +x 00:24:38.564 06:19:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:38.564 06:19:44 -- common/autotest_common.sh@852 -- # return 0 00:24:38.564 06:19:44 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:38.822 NVMe0n1 00:24:38.822 06:19:45 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:39.388 00:24:39.388 06:19:45 -- host/failover.sh@39 -- # run_test_pid=1212211 00:24:39.388 06:19:45 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:39.388 06:19:45 -- host/failover.sh@41 -- # sleep 1 00:24:40.324 06:19:46 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:40.581 [2024-07-13 06:19:46.858548] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.858622] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.858638] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.858651] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.858663] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.858674] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.858686] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.858698] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.858710] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.858721] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.858734] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.858745] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.858757] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the 
state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.858769] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.858781] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.858793] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.858804] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.858816] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.858828] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.858840] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.858851] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.858863] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.858886] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.858899] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.858911] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.858923] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.858935] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.858956] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.858969] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.858981] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.858992] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.859004] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.859015] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.859026] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.859040] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.859052] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.859064] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.859076] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.581 [2024-07-13 06:19:46.859087] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.582 [2024-07-13 06:19:46.859100] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.582 [2024-07-13 06:19:46.859111] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.582 [2024-07-13 06:19:46.859123] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.582 [2024-07-13 06:19:46.859134] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.582 [2024-07-13 06:19:46.859146] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.582 [2024-07-13 06:19:46.859158] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.582 [2024-07-13 06:19:46.859170] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.582 [2024-07-13 06:19:46.859182] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.582 [2024-07-13 06:19:46.859194] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.582 [2024-07-13 06:19:46.859220] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.582 [2024-07-13 06:19:46.859232] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.582 [2024-07-13 06:19:46.859243] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311370 is same with the state(5) to be set 00:24:40.582 06:19:46 -- host/failover.sh@45 -- # sleep 3 00:24:43.867 06:19:49 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:43.867 00:24:43.867 06:19:50 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:44.125 [2024-07-13 06:19:50.471483] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311b80 is same with the state(5) to be set 00:24:44.125 [2024-07-13 06:19:50.471556] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311b80 is same with the state(5) to be set 
00:24:44.126 06:19:50 -- host/failover.sh@50 -- # sleep 3
00:24:47.409 06:19:53 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:47.409 [2024-07-13 06:19:53.761222] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:47.409 06:19:53 -- host/failover.sh@55 -- # sleep 1
00:24:48.341 06:19:54 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:48.602 [2024-07-13 06:19:55.010212] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20bc2f0 is same with the state(5) to be set
[the tcp.c:1574 recv-state error above repeats for tqpair=0x20bc2f0 with consecutive timestamps through 06:19:55.011174]
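The fail-back half of the sequence (host/failover.sh@53 through @57 above) re-advertises the listener on port 4420, confirmed by the nvmf_tcp_listen notice, waits a second, and then removes the temporary listener on port 4422. Sketched with the same target-side rpc.py calls and the same subsystem, address and ports shown in the log:

# Publish the listener on port 4420 again; the target logs "NVMe/TCP Target Listening" when it takes effect
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 1   # brief settle time, mirroring host/failover.sh@55
# Remove the temporary listener on port 4422 so I/O fails back to the 4420 path
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422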
00:24:48.602 06:19:55 -- host/failover.sh@59 -- # wait 1212211
00:24:55.171 0
00:24:55.171 06:20:00 -- host/failover.sh@61 -- # killprocess 1211943
00:24:55.171 06:20:00 -- common/autotest_common.sh@926 -- # '[' -z 1211943 ']'
00:24:55.172 06:20:00 -- common/autotest_common.sh@930 -- # kill -0 1211943
00:24:55.172 06:20:00 -- common/autotest_common.sh@931 -- # uname
00:24:55.172 06:20:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:24:55.172 06:20:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1211943
00:24:55.172 06:20:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:24:55.172 06:20:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:24:55.172 06:20:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1211943'
killing process with pid 1211943
00:24:55.172 06:20:00 -- common/autotest_common.sh@945 -- # kill 1211943
00:24:55.172 06:20:00 -- common/autotest_common.sh@950 -- # wait 1211943
00:24:55.172 06:20:01 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:55.172 [2024-07-13 06:19:43.960454] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:24:55.172 [2024-07-13 06:19:43.960549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1211943 ]
00:24:55.172 EAL: No free 2048 kB hugepages reported on node 1
00:24:55.172 [2024-07-13 06:19:44.023238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:55.172 [2024-07-13 06:19:44.131483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:24:55.172 Running I/O for 15 seconds...
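The bdevperf log (try.txt) dumped by host/failover.sh@63 above continues below with the per-command trace. It is dominated by READ/WRITE submissions whose completions report ABORTED - SQ DELETION, i.e. commands that were still queued on a queue pair that was deleted during the path switch. A purely illustrative one-liner for counting those completions instead of scanning the pairs below, using the same file path as the cat above:

# Count completions aborted by submission-queue deletion in the bdevperf log
grep -c 'ABORTED - SQ DELETION' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt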
00:24:55.172 [2024-07-13 06:19:46.859561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:115664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.172 [2024-07-13 06:19:46.859603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.172 [2024-07-13 06:19:46.859631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:115672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.172 [2024-07-13 06:19:46.859648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.172 [2024-07-13 06:19:46.859666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:115680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.172 [2024-07-13 06:19:46.859682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.172 [2024-07-13 06:19:46.859698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:115712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.172 [2024-07-13 06:19:46.859713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.172 [2024-07-13 06:19:46.859729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:115720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.172 [2024-07-13 06:19:46.859744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.172 [2024-07-13 06:19:46.859760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:115736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.172 [2024-07-13 06:19:46.859775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.172 [2024-07-13 06:19:46.859790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:115752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.172 [2024-07-13 06:19:46.859805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.172 [2024-07-13 06:19:46.859821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:115760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.172 [2024-07-13 06:19:46.859836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.172 [2024-07-13 06:19:46.859852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:115768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.172 [2024-07-13 06:19:46.859873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.172 [2024-07-13 06:19:46.859891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:115776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.172 [2024-07-13 06:19:46.859906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.172 [2024-07-13 
06:19:46.859922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:115792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.172 [2024-07-13 06:19:46.859936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.172 [2024-07-13 06:19:46.859959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:115232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.172 [2024-07-13 06:19:46.859974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.172 [2024-07-13 06:19:46.859990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:115264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.172 [2024-07-13 06:19:46.860005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.172 [2024-07-13 06:19:46.860022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:115280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.172 [2024-07-13 06:19:46.860037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.172 [2024-07-13 06:19:46.860053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:115288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.172 [2024-07-13 06:19:46.860068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.172 [2024-07-13 06:19:46.860084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:115304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.172 [2024-07-13 06:19:46.860098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.172 [2024-07-13 06:19:46.860115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:115312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.172 [2024-07-13 06:19:46.860130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.172 [2024-07-13 06:19:46.860146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:115320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.172 [2024-07-13 06:19:46.860160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.172 [2024-07-13 06:19:46.860192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:115336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.172 [2024-07-13 06:19:46.860206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.172 [2024-07-13 06:19:46.860221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:115816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.172 [2024-07-13 06:19:46.860235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.172 [2024-07-13 06:19:46.860251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:115824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.172 [2024-07-13 06:19:46.860265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.172 [2024-07-13 06:19:46.860280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:115864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.172 [2024-07-13 06:19:46.860294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.172 [2024-07-13 06:19:46.860310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:115872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.172 [2024-07-13 06:19:46.860323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.172 [2024-07-13 06:19:46.860338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:115880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.172 [2024-07-13 06:19:46.860356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.172 [2024-07-13 06:19:46.860372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:115888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.172 [2024-07-13 06:19:46.860387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.172 [2024-07-13 06:19:46.860402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:115896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.172 [2024-07-13 06:19:46.860416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.172 [2024-07-13 06:19:46.860431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:115904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.172 [2024-07-13 06:19:46.860445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.172 [2024-07-13 06:19:46.860461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:115912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.172 [2024-07-13 06:19:46.860475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.172 [2024-07-13 06:19:46.860490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:115920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.172 [2024-07-13 06:19:46.860504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.173 [2024-07-13 06:19:46.860519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:115928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.173 [2024-07-13 06:19:46.860533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.173 [2024-07-13 06:19:46.860549] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:73 nsid:1 lba:115936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.173 [2024-07-13 06:19:46.860563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.173 [2024-07-13 06:19:46.860593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:115944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.173 [2024-07-13 06:19:46.860608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.173 [2024-07-13 06:19:46.860624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:115952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.173 [2024-07-13 06:19:46.860640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.173 [2024-07-13 06:19:46.860656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:115960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.173 [2024-07-13 06:19:46.860670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.173 [2024-07-13 06:19:46.860686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:115968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.173 [2024-07-13 06:19:46.860700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.173 [2024-07-13 06:19:46.860716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:115976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.173 [2024-07-13 06:19:46.860731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.173 [2024-07-13 06:19:46.860750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:115984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.173 [2024-07-13 06:19:46.860765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.173 [2024-07-13 06:19:46.860780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:115992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.173 [2024-07-13 06:19:46.860795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.173 [2024-07-13 06:19:46.860811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:116000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.173 [2024-07-13 06:19:46.860825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.173 [2024-07-13 06:19:46.860841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:116008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.173 [2024-07-13 06:19:46.860855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.173 [2024-07-13 06:19:46.860877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 
lba:116016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.173 [2024-07-13 06:19:46.860893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.173 [2024-07-13 06:19:46.860909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:116024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.173 [2024-07-13 06:19:46.860924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.173 [2024-07-13 06:19:46.860940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:116032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.173 [2024-07-13 06:19:46.860954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.173 [2024-07-13 06:19:46.860970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:116040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.173 [2024-07-13 06:19:46.860984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.173 [2024-07-13 06:19:46.861000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.173 [2024-07-13 06:19:46.861015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.173 [2024-07-13 06:19:46.861031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:116056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.173 [2024-07-13 06:19:46.861045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.173 [2024-07-13 06:19:46.861061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:116064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.173 [2024-07-13 06:19:46.861076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.173 [2024-07-13 06:19:46.861091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:116072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.173 [2024-07-13 06:19:46.861106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.173 [2024-07-13 06:19:46.861121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:116080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.173 [2024-07-13 06:19:46.861146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.173 [2024-07-13 06:19:46.861163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:116088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.173 [2024-07-13 06:19:46.861178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.173 [2024-07-13 06:19:46.861193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:116096 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:55.173 [2024-07-13 06:19:46.861208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.173 [2024-07-13 06:19:46.861224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:116104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.173 [2024-07-13 06:19:46.861238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.173 [2024-07-13 06:19:46.861254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:116112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.173 [2024-07-13 06:19:46.861269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.173 [2024-07-13 06:19:46.861284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:116120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.173 [2024-07-13 06:19:46.861299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.173 [2024-07-13 06:19:46.861314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:116128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.173 [2024-07-13 06:19:46.861329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.173 [2024-07-13 06:19:46.861345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:116136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.173 [2024-07-13 06:19:46.861359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.173 [2024-07-13 06:19:46.861375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:116144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.173 [2024-07-13 06:19:46.861389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.173 [2024-07-13 06:19:46.861405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:116152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.173 [2024-07-13 06:19:46.861419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.173 [2024-07-13 06:19:46.861435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:116160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.173 [2024-07-13 06:19:46.861449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.173 [2024-07-13 06:19:46.861464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:116168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.173 [2024-07-13 06:19:46.861479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.173 [2024-07-13 06:19:46.861495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:115344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.173 
[2024-07-13 06:19:46.861509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.173 [2024-07-13 06:19:46.861528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:115352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.173 [2024-07-13 06:19:46.861543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.173 [2024-07-13 06:19:46.861559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.174 [2024-07-13 06:19:46.861573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.861589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.174 [2024-07-13 06:19:46.861604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.861620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:115392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.174 [2024-07-13 06:19:46.861639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.861655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:115408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.174 [2024-07-13 06:19:46.861670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.861686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:115416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.174 [2024-07-13 06:19:46.861700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.861715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:115424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.174 [2024-07-13 06:19:46.861730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.861746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:115456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.174 [2024-07-13 06:19:46.861760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.861776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:115488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.174 [2024-07-13 06:19:46.861790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.861806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:115504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.174 [2024-07-13 06:19:46.861820] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.861836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:115512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.174 [2024-07-13 06:19:46.861850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.861873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:115520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.174 [2024-07-13 06:19:46.861889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.861905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:115536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.174 [2024-07-13 06:19:46.861927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.861944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:115552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.174 [2024-07-13 06:19:46.861959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.861974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:115624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.174 [2024-07-13 06:19:46.861988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.862004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:116176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.174 [2024-07-13 06:19:46.862018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.862034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:116184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.174 [2024-07-13 06:19:46.862049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.862064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:116192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.174 [2024-07-13 06:19:46.862078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.862094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:116200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.174 [2024-07-13 06:19:46.862109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.862125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:116208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.174 [2024-07-13 06:19:46.862143] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.862159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:116216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.174 [2024-07-13 06:19:46.862174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.862190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:116224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.174 [2024-07-13 06:19:46.862204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.862220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:116232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.174 [2024-07-13 06:19:46.862234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.862251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:116240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.174 [2024-07-13 06:19:46.862265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.862281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:116248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.174 [2024-07-13 06:19:46.862295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.862311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:116256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.174 [2024-07-13 06:19:46.862329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.862345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:116264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.174 [2024-07-13 06:19:46.862360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.862376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:116272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.174 [2024-07-13 06:19:46.862390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.862406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:116280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.174 [2024-07-13 06:19:46.862420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.862436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:115632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.174 [2024-07-13 06:19:46.862450] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.862466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:115640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.174 [2024-07-13 06:19:46.862480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.862496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:115648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.174 [2024-07-13 06:19:46.862511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.862526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:115656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.174 [2024-07-13 06:19:46.862540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.862556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:115688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.174 [2024-07-13 06:19:46.862571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.862586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:115696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.174 [2024-07-13 06:19:46.862601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.862617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:115704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.174 [2024-07-13 06:19:46.862635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.174 [2024-07-13 06:19:46.862651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:115728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.174 [2024-07-13 06:19:46.862666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.175 [2024-07-13 06:19:46.862682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:116288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.175 [2024-07-13 06:19:46.862697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.175 [2024-07-13 06:19:46.862716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:116296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.175 [2024-07-13 06:19:46.862731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.175 [2024-07-13 06:19:46.862746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:116304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.175 [2024-07-13 06:19:46.862761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.175 [2024-07-13 06:19:46.862776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:116312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.175 [2024-07-13 06:19:46.862791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.175 [2024-07-13 06:19:46.862806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:116320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.175 [2024-07-13 06:19:46.862821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.175 [2024-07-13 06:19:46.862836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:116328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.175 [2024-07-13 06:19:46.862851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.175 [2024-07-13 06:19:46.862873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:116336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.175 [2024-07-13 06:19:46.862890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.175 [2024-07-13 06:19:46.862906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:116344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.175 [2024-07-13 06:19:46.862921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.175 [2024-07-13 06:19:46.862936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:116352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.175 [2024-07-13 06:19:46.862951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.175 [2024-07-13 06:19:46.862967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:116360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.175 [2024-07-13 06:19:46.862981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.175 [2024-07-13 06:19:46.862997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:116368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.175 [2024-07-13 06:19:46.863011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.175 [2024-07-13 06:19:46.863027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:116376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.175 [2024-07-13 06:19:46.863042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.175 [2024-07-13 06:19:46.863057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:116384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.175 [2024-07-13 06:19:46.863072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:24:55.175 [2024-07-13 06:19:46.863088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:116392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.175 [2024-07-13 06:19:46.863106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.175 [2024-07-13 06:19:46.863123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:116400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.175 [2024-07-13 06:19:46.863138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.175 [2024-07-13 06:19:46.863154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:116408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.175 [2024-07-13 06:19:46.863169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.175 [2024-07-13 06:19:46.863184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:116416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.175 [2024-07-13 06:19:46.863199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.175 [2024-07-13 06:19:46.863214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:116424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.175 [2024-07-13 06:19:46.863229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.175 [2024-07-13 06:19:46.863244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:116432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.175 [2024-07-13 06:19:46.863259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.175 [2024-07-13 06:19:46.863274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:116440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.175 [2024-07-13 06:19:46.863289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.175 [2024-07-13 06:19:46.863306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:116448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.175 [2024-07-13 06:19:46.863321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.175 [2024-07-13 06:19:46.863337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:116456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.175 [2024-07-13 06:19:46.863352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.175 [2024-07-13 06:19:46.863367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:115744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.175 [2024-07-13 06:19:46.863383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.175 [2024-07-13 
06:19:46.863399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:115784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.175 [2024-07-13 06:19:46.863414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.175 [2024-07-13 06:19:46.863429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:115800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.175 [2024-07-13 06:19:46.863444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.175 [2024-07-13 06:19:46.863459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:115808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.175 [2024-07-13 06:19:46.863474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.175 [2024-07-13 06:19:46.863493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:115832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.175 [2024-07-13 06:19:46.863508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.175 [2024-07-13 06:19:46.863524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:115840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.175 [2024-07-13 06:19:46.863538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.175 [2024-07-13 06:19:46.863554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:115848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.175 [2024-07-13 06:19:46.863568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.175 [2024-07-13 06:19:46.863583] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd56600 is same with the state(5) to be set 00:24:55.175 [2024-07-13 06:19:46.863599] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:55.175 [2024-07-13 06:19:46.863611] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:55.175 [2024-07-13 06:19:46.863623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115856 len:8 PRP1 0x0 PRP2 0x0 00:24:55.175 [2024-07-13 06:19:46.863636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.175 [2024-07-13 06:19:46.863700] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd56600 was disconnected and freed. reset controller. 
00:24:55.175 [2024-07-13 06:19:46.863727] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 
00:24:55.175 [2024-07-13 06:19:46.863762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:55.175 [2024-07-13 06:19:46.863781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:55.175 [2024-07-13 06:19:46.863796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:55.175 [2024-07-13 06:19:46.863810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:55.175 [2024-07-13 06:19:46.863824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:55.175 [2024-07-13 06:19:46.863838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:55.176 [2024-07-13 06:19:46.863851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:24:55.176 [2024-07-13 06:19:46.863871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:55.176 [2024-07-13 06:19:46.863887] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:55.176 [2024-07-13 06:19:46.866105] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:24:55.176 [2024-07-13 06:19:46.866143] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd37bd0 (9): Bad file descriptor 
00:24:55.176 [2024-07-13 06:19:46.899703] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
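The block above is one complete failover cycle: I/O queued on the deleted submission queue is completed with ABORTED - SQ DELETION status, bdev_nvme fails the path over from 10.0.0.2:4420 to 10.0.0.2:4421, and the controller reset finishes. A minimal shell sketch for summarizing this output offline is shown below; it assumes the console text has been saved to a file (build.log is only a placeholder name) and relies solely on the message formats visible in the log.

# Count every completion aborted by SQ deletion, independent of line wrapping.
grep -o 'ABORTED - SQ DELETION' build.log | wc -l
# List the failover transitions (e.g. 4420 -> 4421 -> 4422) in the order they occurred.
grep -o 'Start failover from [0-9.:]* to [0-9.:]*' build.log
# Tally aborted READs vs WRITEs from the nvme_io_qpair_print_command notices.
awk '/nvme_io_qpair_print_command/ { for (i = 1; i <= NF; i++) if ($i == "READ" || $i == "WRITE") n[$i]++ }
     END { for (op in n) print op, n[op] }' build.log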
00:24:55.176 [2024-07-13 06:19:50.472471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:103448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.176 [2024-07-13 06:19:50.472516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.176 [2024-07-13 06:19:50.472553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:103456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.176 [2024-07-13 06:19:50.472570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.176 [2024-07-13 06:19:50.472588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:103472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.176 [2024-07-13 06:19:50.472603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.176 [2024-07-13 06:19:50.472619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:103480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.176 [2024-07-13 06:19:50.472633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.176 [2024-07-13 06:19:50.472648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:103504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.176 [2024-07-13 06:19:50.472662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.176 [2024-07-13 06:19:50.472678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:103512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.176 [2024-07-13 06:19:50.472692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.176 [2024-07-13 06:19:50.472708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:103520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.176 [2024-07-13 06:19:50.472722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.176 [2024-07-13 06:19:50.472737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:102888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.176 [2024-07-13 06:19:50.472752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.176 [2024-07-13 06:19:50.472767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:102896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.176 [2024-07-13 06:19:50.472781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.176 [2024-07-13 06:19:50.472796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:102904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.176 [2024-07-13 06:19:50.472810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.176 [2024-07-13 
06:19:50.472825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:102920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.176 [2024-07-13 06:19:50.472839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.176 [2024-07-13 06:19:50.472854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:102952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.176 [2024-07-13 06:19:50.472891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.176 [2024-07-13 06:19:50.472910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:102984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.176 [2024-07-13 06:19:50.472925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.176 [2024-07-13 06:19:50.472940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:102992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.176 [2024-07-13 06:19:50.472963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.176 [2024-07-13 06:19:50.472980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:103008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.176 [2024-07-13 06:19:50.472994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.176 [2024-07-13 06:19:50.473010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:103536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.176 [2024-07-13 06:19:50.473024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.176 [2024-07-13 06:19:50.473040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:103544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.176 [2024-07-13 06:19:50.473056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.176 [2024-07-13 06:19:50.473072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:103576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.176 [2024-07-13 06:19:50.473086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.176 [2024-07-13 06:19:50.473101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:103584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.176 [2024-07-13 06:19:50.473116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.176 [2024-07-13 06:19:50.473131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:103600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.176 [2024-07-13 06:19:50.473146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.176 [2024-07-13 06:19:50.473161] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:103624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.176 [2024-07-13 06:19:50.473191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.176 [2024-07-13 06:19:50.473207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:103632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.176 [2024-07-13 06:19:50.473221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.176 [2024-07-13 06:19:50.473236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:103640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.176 [2024-07-13 06:19:50.473250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.176 [2024-07-13 06:19:50.473265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:103648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.176 [2024-07-13 06:19:50.473279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.176 [2024-07-13 06:19:50.473295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.176 [2024-07-13 06:19:50.473309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.176 [2024-07-13 06:19:50.473324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:103024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.176 [2024-07-13 06:19:50.473338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.176 [2024-07-13 06:19:50.473357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:103048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.176 [2024-07-13 06:19:50.473372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.176 [2024-07-13 06:19:50.473388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:103056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.176 [2024-07-13 06:19:50.473402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.176 [2024-07-13 06:19:50.473418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:103064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.176 [2024-07-13 06:19:50.473432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.176 [2024-07-13 06:19:50.473447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:103112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.176 [2024-07-13 06:19:50.473461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.176 [2024-07-13 06:19:50.473476] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:103120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.176 [2024-07-13 06:19:50.473491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.176 [2024-07-13 06:19:50.473506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:103144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.177 [2024-07-13 06:19:50.473520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.177 [2024-07-13 06:19:50.473536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:103656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.177 [2024-07-13 06:19:50.473550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.177 [2024-07-13 06:19:50.473565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:103672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.177 [2024-07-13 06:19:50.473579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.177 [2024-07-13 06:19:50.473594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:103680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.177 [2024-07-13 06:19:50.473608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.177 [2024-07-13 06:19:50.473623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:103712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.177 [2024-07-13 06:19:50.473638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.177 [2024-07-13 06:19:50.473653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:103720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.177 [2024-07-13 06:19:50.473667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.177 [2024-07-13 06:19:50.473683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:103728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.177 [2024-07-13 06:19:50.473697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.177 [2024-07-13 06:19:50.473712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:103736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.177 [2024-07-13 06:19:50.473729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.177 [2024-07-13 06:19:50.473746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:103752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.177 [2024-07-13 06:19:50.473761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.177 [2024-07-13 06:19:50.473776] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:23 nsid:1 lba:103760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.177 [2024-07-13 06:19:50.473790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.177 [2024-07-13 06:19:50.473805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:103768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.177 [2024-07-13 06:19:50.473819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.177 [2024-07-13 06:19:50.473834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:103776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.177 [2024-07-13 06:19:50.473848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.177 [2024-07-13 06:19:50.473863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:103152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.177 [2024-07-13 06:19:50.473899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.177 [2024-07-13 06:19:50.473916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:103192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.177 [2024-07-13 06:19:50.473930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.177 [2024-07-13 06:19:50.473946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:103200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.177 [2024-07-13 06:19:50.473961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.177 [2024-07-13 06:19:50.473976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.177 [2024-07-13 06:19:50.473991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.177 [2024-07-13 06:19:50.474007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:103232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.177 [2024-07-13 06:19:50.474022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.177 [2024-07-13 06:19:50.474038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:103248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.177 [2024-07-13 06:19:50.474053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.177 [2024-07-13 06:19:50.474069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:103256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.177 [2024-07-13 06:19:50.474084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.177 [2024-07-13 06:19:50.474099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 
nsid:1 lba:103272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.177 [2024-07-13 06:19:50.474114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.177 [2024-07-13 06:19:50.474129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:103784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.177 [2024-07-13 06:19:50.474147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.177 [2024-07-13 06:19:50.474164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:103792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.177 [2024-07-13 06:19:50.474193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.177 [2024-07-13 06:19:50.474210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:103800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.177 [2024-07-13 06:19:50.474224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.177 [2024-07-13 06:19:50.474239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:103808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.177 [2024-07-13 06:19:50.474253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.177 [2024-07-13 06:19:50.474268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:103816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.177 [2024-07-13 06:19:50.474283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.177 [2024-07-13 06:19:50.474298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:103824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.177 [2024-07-13 06:19:50.474312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.177 [2024-07-13 06:19:50.474327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:103832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.177 [2024-07-13 06:19:50.474341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.177 [2024-07-13 06:19:50.474356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:103840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.177 [2024-07-13 06:19:50.474370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.177 [2024-07-13 06:19:50.474385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:103848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.177 [2024-07-13 06:19:50.474399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.177 [2024-07-13 06:19:50.474414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:103856 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:55.177 [2024-07-13 06:19:50.474428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.177 [2024-07-13 06:19:50.474443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:103864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.177 [2024-07-13 06:19:50.474457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.177 [2024-07-13 06:19:50.474472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:103872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.178 [2024-07-13 06:19:50.474486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.178 [2024-07-13 06:19:50.474501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:103880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.178 [2024-07-13 06:19:50.474515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.178 [2024-07-13 06:19:50.474534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:103888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.178 [2024-07-13 06:19:50.474549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.178 [2024-07-13 06:19:50.474564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:103896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.178 [2024-07-13 06:19:50.474579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.178 [2024-07-13 06:19:50.474594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:103904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.178 [2024-07-13 06:19:50.474608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.178 [2024-07-13 06:19:50.474622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:103912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.178 [2024-07-13 06:19:50.474636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.178 [2024-07-13 06:19:50.474651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:103312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.178 [2024-07-13 06:19:50.474666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.178 [2024-07-13 06:19:50.474681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:103320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.178 [2024-07-13 06:19:50.474695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.178 [2024-07-13 06:19:50.474711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:103360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.178 
[2024-07-13 06:19:50.474725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.178 [2024-07-13 06:19:50.474740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:103368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.178 [2024-07-13 06:19:50.474754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.178 [2024-07-13 06:19:50.474769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:103384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.178 [2024-07-13 06:19:50.474783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.178 [2024-07-13 06:19:50.474798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:103400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.178 [2024-07-13 06:19:50.474813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.178 [2024-07-13 06:19:50.474828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:103416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.178 [2024-07-13 06:19:50.474841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.178 [2024-07-13 06:19:50.474856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:103424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.178 [2024-07-13 06:19:50.474894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.178 [2024-07-13 06:19:50.474912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:103920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.178 [2024-07-13 06:19:50.474931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.178 [2024-07-13 06:19:50.474946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:103928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.178 [2024-07-13 06:19:50.474961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.178 [2024-07-13 06:19:50.474976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.178 [2024-07-13 06:19:50.474991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.178 [2024-07-13 06:19:50.475006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:103944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.178 [2024-07-13 06:19:50.475021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.178 [2024-07-13 06:19:50.475036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:103952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.178 [2024-07-13 06:19:50.475051] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.178 [2024-07-13 06:19:50.475066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:103960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.178 [2024-07-13 06:19:50.475081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.178 [2024-07-13 06:19:50.475097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:103968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.178 [2024-07-13 06:19:50.475112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.178 [2024-07-13 06:19:50.475127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:103976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.178 [2024-07-13 06:19:50.475142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.178 [2024-07-13 06:19:50.475157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.178 [2024-07-13 06:19:50.475187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.178 [2024-07-13 06:19:50.475204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:103992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.178 [2024-07-13 06:19:50.475218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.178 [2024-07-13 06:19:50.475233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:104000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.178 [2024-07-13 06:19:50.475263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.178 [2024-07-13 06:19:50.475279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:104008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.178 [2024-07-13 06:19:50.475294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.178 [2024-07-13 06:19:50.475310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:104016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.178 [2024-07-13 06:19:50.475324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.178 [2024-07-13 06:19:50.475344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.178 [2024-07-13 06:19:50.475360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.178 [2024-07-13 06:19:50.475375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:104032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.178 [2024-07-13 06:19:50.475390] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.178 [2024-07-13 06:19:50.475406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:104040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.178 [2024-07-13 06:19:50.475420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.178 [2024-07-13 06:19:50.475436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:104048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.178 [2024-07-13 06:19:50.475451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.178 [2024-07-13 06:19:50.475466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:104056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.178 [2024-07-13 06:19:50.475481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.178 [2024-07-13 06:19:50.475496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:104064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.178 [2024-07-13 06:19:50.475511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.178 [2024-07-13 06:19:50.475527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:104072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.179 [2024-07-13 06:19:50.475541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.179 [2024-07-13 06:19:50.475558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:103440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.179 [2024-07-13 06:19:50.475572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.179 [2024-07-13 06:19:50.475588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.179 [2024-07-13 06:19:50.475603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.179 [2024-07-13 06:19:50.475618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:103488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.179 [2024-07-13 06:19:50.475633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.179 [2024-07-13 06:19:50.475648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:103496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.179 [2024-07-13 06:19:50.475663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.179 [2024-07-13 06:19:50.475679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:103528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.179 [2024-07-13 06:19:50.475693] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.179 [2024-07-13 06:19:50.475709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:103552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.179 [2024-07-13 06:19:50.475727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.179 [2024-07-13 06:19:50.475743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:103560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.179 [2024-07-13 06:19:50.475757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.179 [2024-07-13 06:19:50.475773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:103568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.179 [2024-07-13 06:19:50.475788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.179 [2024-07-13 06:19:50.475804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:104080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.179 [2024-07-13 06:19:50.475818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.179 [2024-07-13 06:19:50.475834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:104088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.179 [2024-07-13 06:19:50.475848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.179 [2024-07-13 06:19:50.475869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:104096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.179 [2024-07-13 06:19:50.475885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.179 [2024-07-13 06:19:50.475902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:104104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.179 [2024-07-13 06:19:50.475916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.179 [2024-07-13 06:19:50.475932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:104112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.179 [2024-07-13 06:19:50.475947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.179 [2024-07-13 06:19:50.475962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:104120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.179 [2024-07-13 06:19:50.475977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.179 [2024-07-13 06:19:50.475993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:104128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.179 [2024-07-13 06:19:50.476018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.179 [2024-07-13 06:19:50.476033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:104136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.179 [2024-07-13 06:19:50.476048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.179 [2024-07-13 06:19:50.476064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:104144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.179 [2024-07-13 06:19:50.476078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.179 [2024-07-13 06:19:50.476094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:104152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.179 [2024-07-13 06:19:50.476108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.179 [2024-07-13 06:19:50.476128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:104160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.179 [2024-07-13 06:19:50.476143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.179 [2024-07-13 06:19:50.476158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:104168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.179 [2024-07-13 06:19:50.476173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.179 [2024-07-13 06:19:50.476188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:104176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.179 [2024-07-13 06:19:50.476203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.179 [2024-07-13 06:19:50.476218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:104184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.179 [2024-07-13 06:19:50.476233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.179 [2024-07-13 06:19:50.476248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:104192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.179 [2024-07-13 06:19:50.476263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.179 [2024-07-13 06:19:50.476279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:104200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.179 [2024-07-13 06:19:50.476293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.179 [2024-07-13 06:19:50.476309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:103592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.179 [2024-07-13 06:19:50.476324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:55.179 [2024-07-13 06:19:50.476339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:103608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.179 [2024-07-13 06:19:50.476353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.179 [2024-07-13 06:19:50.476369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:103616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.179 [2024-07-13 06:19:50.476383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.179 [2024-07-13 06:19:50.476398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:103664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.179 [2024-07-13 06:19:50.476413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.179 [2024-07-13 06:19:50.476428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:103688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.179 [2024-07-13 06:19:50.476443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.179 [2024-07-13 06:19:50.476458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:103696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.179 [2024-07-13 06:19:50.476472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.179 [2024-07-13 06:19:50.476487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.179 [2024-07-13 06:19:50.476505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.179 [2024-07-13 06:19:50.476520] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd44050 is same with the state(5) to be set 00:24:55.179 [2024-07-13 06:19:50.476537] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:55.179 [2024-07-13 06:19:50.476549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:55.179 [2024-07-13 06:19:50.476561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103744 len:8 PRP1 0x0 PRP2 0x0 00:24:55.179 [2024-07-13 06:19:50.476574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.179 [2024-07-13 06:19:50.476640] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd44050 was disconnected and freed. reset controller. 
00:24:55.179 [2024-07-13 06:19:50.476660] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:24:55.179 [2024-07-13 06:19:50.476692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:55.179 [2024-07-13 06:19:50.476710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:55.180 [2024-07-13 06:19:50.476726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:55.180 [2024-07-13 06:19:50.476740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:55.180 [2024-07-13 06:19:50.476753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:55.180 [2024-07-13 06:19:50.476766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:55.180 [2024-07-13 06:19:50.476780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:55.180 [2024-07-13 06:19:50.476793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:55.180 [2024-07-13 06:19:50.476806] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:55.180 [2024-07-13 06:19:50.478852] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:55.180 [2024-07-13 06:19:50.478899] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd37bd0 (9): Bad file descriptor
00:24:55.180 [2024-07-13 06:19:50.507616] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
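The block above is one complete failover cycle on the host side: every queued I/O on the dying TCP qpair is completed as ABORTED - SQ DELETION, the admin qpair is torn down, bdev_nvme switches the trid from 10.0.0.2:4421 to 10.0.0.2:4422, and the controller reset completes. A minimal sketch of how the attached controller can be inspected out of band, reusing the rpc.py path and the bdevperf RPC socket that appear later in this trace; this is illustrative only, not part of failover.sh, and it assumes bdevperf was started with -r so the socket exists:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  # List the NVMe controllers the bdevperf host currently has attached
  $rpc -s $sock bdev_nvme_get_controllers
  # The test itself only checks that the expected controller name is present
  $rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0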
00:24:55.180 [2024-07-13 06:19:55.011318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:54800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.180 [2024-07-13 06:19:55.011363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.180 [2024-07-13 06:19:55.011392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:54208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.180 [2024-07-13 06:19:55.011408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.180 [2024-07-13 06:19:55.011425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:54256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.180 [2024-07-13 06:19:55.011440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.180 [2024-07-13 06:19:55.011456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:54264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.180 [2024-07-13 06:19:55.011471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.180 [2024-07-13 06:19:55.011493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:54296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.180 [2024-07-13 06:19:55.011508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.180 [2024-07-13 06:19:55.011524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:54312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.180 [2024-07-13 06:19:55.011538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.180 [2024-07-13 06:19:55.011554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:54320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.180 [2024-07-13 06:19:55.011568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.180 [2024-07-13 06:19:55.011583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:54344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.180 [2024-07-13 06:19:55.011598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.180 [2024-07-13 06:19:55.011613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:54352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.180 [2024-07-13 06:19:55.011627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.180 [2024-07-13 06:19:55.011642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:54824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.180 [2024-07-13 06:19:55.011657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.180 [2024-07-13 06:19:55.011672] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:54832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.180 [2024-07-13 06:19:55.011686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.180 [2024-07-13 06:19:55.011702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:54848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.180 [2024-07-13 06:19:55.011716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.180 [2024-07-13 06:19:55.011732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:54856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.180 [2024-07-13 06:19:55.011746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.180 [2024-07-13 06:19:55.011761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:54864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.180 [2024-07-13 06:19:55.011775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.180 [2024-07-13 06:19:55.011790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:54872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.180 [2024-07-13 06:19:55.011804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.180 [2024-07-13 06:19:55.011819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:54880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.180 [2024-07-13 06:19:55.011833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.180 [2024-07-13 06:19:55.011848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:54888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.180 [2024-07-13 06:19:55.011873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.180 [2024-07-13 06:19:55.011909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:54360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.180 [2024-07-13 06:19:55.011924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.180 [2024-07-13 06:19:55.011939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:54384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.180 [2024-07-13 06:19:55.011954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.180 [2024-07-13 06:19:55.011970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:54392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.180 [2024-07-13 06:19:55.011985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.180 [2024-07-13 06:19:55.012001] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:54400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.180 [2024-07-13 06:19:55.012015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.180 [2024-07-13 06:19:55.012031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:54408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.180 [2024-07-13 06:19:55.012046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.180 [2024-07-13 06:19:55.012062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:54448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.180 [2024-07-13 06:19:55.012076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.180 [2024-07-13 06:19:55.012092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:54456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.180 [2024-07-13 06:19:55.012107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.180 [2024-07-13 06:19:55.012123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:54472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.180 [2024-07-13 06:19:55.012138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.180 [2024-07-13 06:19:55.012154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:54928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.180 [2024-07-13 06:19:55.012169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.180 [2024-07-13 06:19:55.012200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:54944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.180 [2024-07-13 06:19:55.012214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.180 [2024-07-13 06:19:55.012230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:54952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.180 [2024-07-13 06:19:55.012244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.180 [2024-07-13 06:19:55.012259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:54960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.180 [2024-07-13 06:19:55.012273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.180 [2024-07-13 06:19:55.012292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:54968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.180 [2024-07-13 06:19:55.012308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.180 [2024-07-13 06:19:55.012323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:54976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.181 [2024-07-13 06:19:55.012338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.181 [2024-07-13 06:19:55.012353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:54992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.181 [2024-07-13 06:19:55.012367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.181 [2024-07-13 06:19:55.012383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:55000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.181 [2024-07-13 06:19:55.012398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.181 [2024-07-13 06:19:55.012413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:55008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.181 [2024-07-13 06:19:55.012427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.181 [2024-07-13 06:19:55.012443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:55016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.181 [2024-07-13 06:19:55.012457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.181 [2024-07-13 06:19:55.012472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:55032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.181 [2024-07-13 06:19:55.012486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.181 [2024-07-13 06:19:55.012502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:55040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.181 [2024-07-13 06:19:55.012516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.181 [2024-07-13 06:19:55.012531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:55048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.181 [2024-07-13 06:19:55.012545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.181 [2024-07-13 06:19:55.012560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:54480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.181 [2024-07-13 06:19:55.012575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.181 [2024-07-13 06:19:55.012591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:54488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.181 [2024-07-13 06:19:55.012605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.181 [2024-07-13 06:19:55.012621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:54496 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.181 [2024-07-13 06:19:55.012635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.181 [2024-07-13 06:19:55.012650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:54504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.181 [2024-07-13 06:19:55.012668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.181 [2024-07-13 06:19:55.012684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:54512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.181 [2024-07-13 06:19:55.012698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.181 [2024-07-13 06:19:55.012714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:54528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.181 [2024-07-13 06:19:55.012728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.181 [2024-07-13 06:19:55.012744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:54544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.181 [2024-07-13 06:19:55.012758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.181 [2024-07-13 06:19:55.012773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:54576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.181 [2024-07-13 06:19:55.012788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.181 [2024-07-13 06:19:55.012803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:55064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.181 [2024-07-13 06:19:55.012817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.181 [2024-07-13 06:19:55.012832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:55072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.181 [2024-07-13 06:19:55.012846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.181 [2024-07-13 06:19:55.012862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:55080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.181 [2024-07-13 06:19:55.012902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.181 [2024-07-13 06:19:55.012920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:55104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.181 [2024-07-13 06:19:55.012935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.181 [2024-07-13 06:19:55.012951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:55120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:55.181 [2024-07-13 06:19:55.012966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.181 [2024-07-13 06:19:55.012982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:55136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.181 [2024-07-13 06:19:55.012997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.181 [2024-07-13 06:19:55.013012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:55144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.181 [2024-07-13 06:19:55.013027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.181 [2024-07-13 06:19:55.013043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:55152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.181 [2024-07-13 06:19:55.013057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.181 [2024-07-13 06:19:55.013077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:55160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.181 [2024-07-13 06:19:55.013092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.181 [2024-07-13 06:19:55.013109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:55168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.181 [2024-07-13 06:19:55.013123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.181 [2024-07-13 06:19:55.013139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:55176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.181 [2024-07-13 06:19:55.013153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.181 [2024-07-13 06:19:55.013169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:55184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.181 [2024-07-13 06:19:55.013198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.181 [2024-07-13 06:19:55.013214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:55192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.181 [2024-07-13 06:19:55.013228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.181 [2024-07-13 06:19:55.013243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:55200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.181 [2024-07-13 06:19:55.013257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.181 [2024-07-13 06:19:55.013272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:55208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.181 [2024-07-13 06:19:55.013286] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.181 [2024-07-13 06:19:55.013302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:55216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.181 [2024-07-13 06:19:55.013316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.181 [2024-07-13 06:19:55.013331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:55224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.181 [2024-07-13 06:19:55.013345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.181 [2024-07-13 06:19:55.013360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:55232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.181 [2024-07-13 06:19:55.013374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.181 [2024-07-13 06:19:55.013390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:55240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.182 [2024-07-13 06:19:55.013405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.182 [2024-07-13 06:19:55.013420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:54584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.182 [2024-07-13 06:19:55.013434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.182 [2024-07-13 06:19:55.013450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:54592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.182 [2024-07-13 06:19:55.013464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.182 [2024-07-13 06:19:55.013483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:54608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.182 [2024-07-13 06:19:55.013498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.182 [2024-07-13 06:19:55.013513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:54640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.182 [2024-07-13 06:19:55.013528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.182 [2024-07-13 06:19:55.013543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:54648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.182 [2024-07-13 06:19:55.013557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.182 [2024-07-13 06:19:55.013573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:54664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.182 [2024-07-13 06:19:55.013586] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.182 [2024-07-13 06:19:55.013601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:54688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.182 [2024-07-13 06:19:55.013616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.182 [2024-07-13 06:19:55.013631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:54704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.182 [2024-07-13 06:19:55.013645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.182 [2024-07-13 06:19:55.013661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:55248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.182 [2024-07-13 06:19:55.013675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.182 [2024-07-13 06:19:55.013691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:55256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.182 [2024-07-13 06:19:55.013705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.182 [2024-07-13 06:19:55.013721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:55264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.182 [2024-07-13 06:19:55.013735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.182 [2024-07-13 06:19:55.013750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:55272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.182 [2024-07-13 06:19:55.013764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.182 [2024-07-13 06:19:55.013779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:55280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.182 [2024-07-13 06:19:55.013794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.182 [2024-07-13 06:19:55.013809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:54712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.182 [2024-07-13 06:19:55.013823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.182 [2024-07-13 06:19:55.013838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:54720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.182 [2024-07-13 06:19:55.013878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.182 [2024-07-13 06:19:55.013897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:54728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.182 [2024-07-13 06:19:55.013912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.182 [2024-07-13 06:19:55.013928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:54736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.182 [2024-07-13 06:19:55.013943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.182 [2024-07-13 06:19:55.013959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:54744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.182 [2024-07-13 06:19:55.013973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.182 [2024-07-13 06:19:55.013989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:54760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.182 [2024-07-13 06:19:55.014004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.182 [2024-07-13 06:19:55.014020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:54768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.182 [2024-07-13 06:19:55.014034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.182 [2024-07-13 06:19:55.014050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:54776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.182 [2024-07-13 06:19:55.014065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.182 [2024-07-13 06:19:55.014080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:55288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.182 [2024-07-13 06:19:55.014095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.182 [2024-07-13 06:19:55.014111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:55296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.182 [2024-07-13 06:19:55.014125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.182 [2024-07-13 06:19:55.014141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.182 [2024-07-13 06:19:55.014156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.182 [2024-07-13 06:19:55.014172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:55312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.182 [2024-07-13 06:19:55.014186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.182 [2024-07-13 06:19:55.014203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:55320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.182 [2024-07-13 06:19:55.014218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:55.182 [2024-07-13 06:19:55.014234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.182 [2024-07-13 06:19:55.014249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.182 [2024-07-13 06:19:55.014269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:55336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.183 [2024-07-13 06:19:55.014284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.183 [2024-07-13 06:19:55.014300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:55344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.183 [2024-07-13 06:19:55.014315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.183 [2024-07-13 06:19:55.014331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:55352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.183 [2024-07-13 06:19:55.014346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.183 [2024-07-13 06:19:55.014363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:55360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.183 [2024-07-13 06:19:55.014378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.183 [2024-07-13 06:19:55.014394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:55368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.183 [2024-07-13 06:19:55.014409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.183 [2024-07-13 06:19:55.014425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:55376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.183 [2024-07-13 06:19:55.014439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.183 [2024-07-13 06:19:55.014456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:55384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.183 [2024-07-13 06:19:55.014471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.183 [2024-07-13 06:19:55.014487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:55392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.183 [2024-07-13 06:19:55.014501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.183 [2024-07-13 06:19:55.014517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:55400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.183 [2024-07-13 06:19:55.014532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.183 [2024-07-13 06:19:55.014548] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:55408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.183 [2024-07-13 06:19:55.014563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.183 [2024-07-13 06:19:55.014579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:54792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.183 [2024-07-13 06:19:55.014593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.183 [2024-07-13 06:19:55.014609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.183 [2024-07-13 06:19:55.014624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.183 [2024-07-13 06:19:55.014640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:54816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.183 [2024-07-13 06:19:55.014659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.183 [2024-07-13 06:19:55.014676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:54840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.183 [2024-07-13 06:19:55.014691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.183 [2024-07-13 06:19:55.014706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:54896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.183 [2024-07-13 06:19:55.014721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.183 [2024-07-13 06:19:55.014737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:54904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.183 [2024-07-13 06:19:55.014751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.183 [2024-07-13 06:19:55.014767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:54912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.183 [2024-07-13 06:19:55.014782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.183 [2024-07-13 06:19:55.014798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:54920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.183 [2024-07-13 06:19:55.014813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.183 [2024-07-13 06:19:55.014828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:55416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.183 [2024-07-13 06:19:55.014843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.183 [2024-07-13 06:19:55.014859] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:55424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.183 [2024-07-13 06:19:55.014882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.183 [2024-07-13 06:19:55.014900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:55432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.183 [2024-07-13 06:19:55.014914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.183 [2024-07-13 06:19:55.014930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:55440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.183 [2024-07-13 06:19:55.014945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.183 [2024-07-13 06:19:55.014961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:55448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.183 [2024-07-13 06:19:55.014976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.183 [2024-07-13 06:19:55.014991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:55456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.183 [2024-07-13 06:19:55.015005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.183 [2024-07-13 06:19:55.015021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:55464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.183 [2024-07-13 06:19:55.015035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.183 [2024-07-13 06:19:55.015055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:55472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.183 [2024-07-13 06:19:55.015070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.183 [2024-07-13 06:19:55.015086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:55480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.183 [2024-07-13 06:19:55.015100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.183 [2024-07-13 06:19:55.015116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:55488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:55.183 [2024-07-13 06:19:55.015131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.183 [2024-07-13 06:19:55.015146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:54936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.183 [2024-07-13 06:19:55.015161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.183 [2024-07-13 06:19:55.015176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:64 nsid:1 lba:54984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.183 [2024-07-13 06:19:55.015191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.183 [2024-07-13 06:19:55.015207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:55024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.183 [2024-07-13 06:19:55.015221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.183 [2024-07-13 06:19:55.015237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:55056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.183 [2024-07-13 06:19:55.015251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.183 [2024-07-13 06:19:55.015267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:55088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.183 [2024-07-13 06:19:55.015282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.184 [2024-07-13 06:19:55.015298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:55096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.184 [2024-07-13 06:19:55.015313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.184 [2024-07-13 06:19:55.015329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:55112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:55.184 [2024-07-13 06:19:55.015343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.184 [2024-07-13 06:19:55.015358] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd64540 is same with the state(5) to be set 00:24:55.184 [2024-07-13 06:19:55.015375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:55.184 [2024-07-13 06:19:55.015386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:55.184 [2024-07-13 06:19:55.015405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55128 len:8 PRP1 0x0 PRP2 0x0 00:24:55.184 [2024-07-13 06:19:55.015419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.184 [2024-07-13 06:19:55.015484] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd64540 was disconnected and freed. reset controller. 
00:24:55.184 [2024-07-13 06:19:55.015505] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:24:55.184 [2024-07-13 06:19:55.015542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:55.184 [2024-07-13 06:19:55.015561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:55.184 [2024-07-13 06:19:55.015576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:55.184 [2024-07-13 06:19:55.015590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:55.184 [2024-07-13 06:19:55.015604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:55.184 [2024-07-13 06:19:55.015617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:55.184 [2024-07-13 06:19:55.015631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:55.184 [2024-07-13 06:19:55.015644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:55.184 [2024-07-13 06:19:55.015658] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:55.184 [2024-07-13 06:19:55.017841] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:55.184 [2024-07-13 06:19:55.017888] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd37bd0 (9): Bad file descriptor
00:24:55.184 [2024-07-13 06:19:55.049339] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
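This is the third failover/reset cycle of the 15-second verify run (4420 -> 4421 -> 4422 -> 4420), and each cycle ends with 'Resetting controller successful'. The check that follows in the trace simply counts those messages; a short sketch of that verification, assuming the count is taken from the per-test log (try.txt) that this job cats further down:

  # Assumed source of the count: the per-test log this job later cats
  log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
  # One successful reset is expected per forced failover, three in total
  count=$(grep -c 'Resetting controller successful' "$log")
  (( count != 3 )) && echo "unexpected reset count: $count" && exit 1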
00:24:55.184 00:24:55.184 Latency(us) 00:24:55.184 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.184 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:55.184 Verification LBA range: start 0x0 length 0x4000 00:24:55.184 NVMe0n1 : 15.01 12787.50 49.95 345.79 0.00 9729.28 588.61 15146.10 00:24:55.184 =================================================================================================================== 00:24:55.184 Total : 12787.50 49.95 345.79 0.00 9729.28 588.61 15146.10 00:24:55.184 Received shutdown signal, test time was about 15.000000 seconds 00:24:55.184 00:24:55.184 Latency(us) 00:24:55.184 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.184 =================================================================================================================== 00:24:55.184 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:55.184 06:20:01 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:55.184 06:20:01 -- host/failover.sh@65 -- # count=3 00:24:55.184 06:20:01 -- host/failover.sh@67 -- # (( count != 3 )) 00:24:55.184 06:20:01 -- host/failover.sh@73 -- # bdevperf_pid=1214084 00:24:55.184 06:20:01 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:55.184 06:20:01 -- host/failover.sh@75 -- # waitforlisten 1214084 /var/tmp/bdevperf.sock 00:24:55.184 06:20:01 -- common/autotest_common.sh@819 -- # '[' -z 1214084 ']' 00:24:55.184 06:20:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:55.184 06:20:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:55.184 06:20:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:55.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
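With the counted failovers verified, the script restarts bdevperf in RPC-driven mode so the remaining steps can be issued over a Unix domain socket. A sketch of that launch using the exact flags shown above; the wait loop here is only illustrative (the script itself uses its waitforlisten helper), and -z/-r leave bdevperf idle until perform_tests is sent over the socket later in the trace:

  bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  sock=/var/tmp/bdevperf.sock
  # Flags copied verbatim from the trace: queue depth 128, 4 KiB verify I/O, 1 s runs
  $bdevperf -z -r $sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!
  # Illustrative: wait until the RPC socket shows up before issuing rpc.py calls
  while [ ! -S $sock ]; do sleep 0.1; done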
00:24:55.184 06:20:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:55.184 06:20:01 -- common/autotest_common.sh@10 -- # set +x 00:24:55.750 06:20:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:55.750 06:20:02 -- common/autotest_common.sh@852 -- # return 0 00:24:55.750 06:20:02 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:56.007 [2024-07-13 06:20:02.277713] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:56.007 06:20:02 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:56.007 [2024-07-13 06:20:02.514382] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:56.269 06:20:02 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:56.548 NVMe0n1 00:24:56.548 06:20:02 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:56.806 00:24:56.806 06:20:03 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:57.373 00:24:57.373 06:20:03 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:57.373 06:20:03 -- host/failover.sh@82 -- # grep -q NVMe0 00:24:57.632 06:20:03 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:57.633 06:20:04 -- host/failover.sh@87 -- # sleep 3 00:25:00.917 06:20:07 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:00.917 06:20:07 -- host/failover.sh@88 -- # grep -q NVMe0 00:25:00.917 06:20:07 -- host/failover.sh@90 -- # run_test_pid=1214810 00:25:00.917 06:20:07 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:00.917 06:20:07 -- host/failover.sh@92 -- # wait 1214810 00:25:02.294 0 00:25:02.294 06:20:08 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:02.294 [2024-07-13 06:20:01.128237] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
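The trace above is the multipath setup and the forced failover for the 1-second run: two extra listeners are added on the target, bdevperf attaches the same subsystem through ports 4420/4421/4422 under the single name NVMe0 (each additional attach registers an alternate failover trid), the active 4420 path is detached, and perform_tests is driven over the RPC socket. A condensed sketch of the same sequence; $rootdir is shorthand for the workspace path used throughout this job, and the loop compresses the three attach calls shown above:

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc=$rootdir/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  nqn=nqn.2016-06.io.spdk:cnode1
  # Target side: listen on two additional TCP ports
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4422
  # Host side (bdevperf): attach the same subsystem over all three ports as NVMe0
  for port in 4420 4421 4422; do
      $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $nqn
  done
  # Drop the active path, give the host a moment, then run the timed workload
  $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
  sleep 3
  $rootdir/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests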
00:25:02.294 [2024-07-13 06:20:01.128330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1214084 ] 00:25:02.294 EAL: No free 2048 kB hugepages reported on node 1 00:25:02.294 [2024-07-13 06:20:01.187994] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.294 [2024-07-13 06:20:01.291174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.294 [2024-07-13 06:20:04.107435] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:02.294 [2024-07-13 06:20:04.107521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.295 [2024-07-13 06:20:04.107543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.295 [2024-07-13 06:20:04.107574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.295 [2024-07-13 06:20:04.107589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.295 [2024-07-13 06:20:04.107603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.295 [2024-07-13 06:20:04.107617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.295 [2024-07-13 06:20:04.107631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:02.295 [2024-07-13 06:20:04.107644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:02.295 [2024-07-13 06:20:04.107658] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.295 [2024-07-13 06:20:04.107696] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.295 [2024-07-13 06:20:04.107726] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x56cbd0 (9): Bad file descriptor 00:25:02.295 [2024-07-13 06:20:04.112749] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:02.295 Running I/O for 1 seconds... 
00:25:02.295 00:25:02.295 Latency(us) 00:25:02.295 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.295 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:02.295 Verification LBA range: start 0x0 length 0x4000 00:25:02.295 NVMe0n1 : 1.01 12721.88 49.69 0.00 0.00 10014.03 1462.42 15922.82 00:25:02.295 =================================================================================================================== 00:25:02.295 Total : 12721.88 49.69 0.00 0.00 10014.03 1462.42 15922.82 00:25:02.295 06:20:08 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:02.295 06:20:08 -- host/failover.sh@95 -- # grep -q NVMe0 00:25:02.295 06:20:08 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:02.553 06:20:08 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:02.553 06:20:08 -- host/failover.sh@99 -- # grep -q NVMe0 00:25:02.811 06:20:09 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:03.069 06:20:09 -- host/failover.sh@101 -- # sleep 3 00:25:06.359 06:20:12 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:06.359 06:20:12 -- host/failover.sh@103 -- # grep -q NVMe0 00:25:06.360 06:20:12 -- host/failover.sh@108 -- # killprocess 1214084 00:25:06.360 06:20:12 -- common/autotest_common.sh@926 -- # '[' -z 1214084 ']' 00:25:06.360 06:20:12 -- common/autotest_common.sh@930 -- # kill -0 1214084 00:25:06.360 06:20:12 -- common/autotest_common.sh@931 -- # uname 00:25:06.360 06:20:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:06.360 06:20:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1214084 00:25:06.360 06:20:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:06.360 06:20:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:06.360 06:20:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1214084' 00:25:06.360 killing process with pid 1214084 00:25:06.360 06:20:12 -- common/autotest_common.sh@945 -- # kill 1214084 00:25:06.360 06:20:12 -- common/autotest_common.sh@950 -- # wait 1214084 00:25:06.618 06:20:12 -- host/failover.sh@110 -- # sync 00:25:06.618 06:20:12 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:06.876 06:20:13 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:06.876 06:20:13 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:06.876 06:20:13 -- host/failover.sh@116 -- # nvmftestfini 00:25:06.876 06:20:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:06.876 06:20:13 -- nvmf/common.sh@116 -- # sync 00:25:06.876 06:20:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:06.876 06:20:13 -- nvmf/common.sh@119 -- # set +e 00:25:06.876 06:20:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:06.876 06:20:13 -- nvmf/common.sh@121 
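The failover steps traced above reduce to a short sequence: publish the same subsystem on several TCP ports, give bdevperf one controller with a path per port, then tear paths down while I/O runs and confirm the controller survives. Below is a condensed, hedged sketch of that sequence using the rpc.py and bdevperf.py invocations from the trace; $rootdir stands in for the workspace checkout path shown above, and the socket, NQN, address and ports are the ones printed in the log.

# Sketch only - not the full host/failover.sh script.
rpc() { "$rootdir/scripts/rpc.py" "$@"; }                 # target RPC (default socket)
bperf_rpc() { rpc -s /var/tmp/bdevperf.sock "$@"; }       # bdevperf's RPC socket

# Expose nqn.2016-06.io.spdk:cnode1 on two extra ports (4420 was added earlier).
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

# Attach bdevperf's NVMe0 controller over each path.
for port in 4420 4421 4422; do
    bperf_rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done

# Remove the active path, run I/O, and verify NVMe0 is still reported,
# i.e. the bdev_nvme layer failed over to one of the remaining paths.
bperf_rpc bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1
"$rootdir/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests
bperf_rpc bdev_nvme_get_controllers | grep -q NVMe0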
-- # modprobe -v -r nvme-tcp 00:25:06.876 rmmod nvme_tcp 00:25:06.876 rmmod nvme_fabrics 00:25:06.876 rmmod nvme_keyring 00:25:06.876 06:20:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:06.876 06:20:13 -- nvmf/common.sh@123 -- # set -e 00:25:06.876 06:20:13 -- nvmf/common.sh@124 -- # return 0 00:25:06.876 06:20:13 -- nvmf/common.sh@477 -- # '[' -n 1211632 ']' 00:25:06.876 06:20:13 -- nvmf/common.sh@478 -- # killprocess 1211632 00:25:06.876 06:20:13 -- common/autotest_common.sh@926 -- # '[' -z 1211632 ']' 00:25:06.876 06:20:13 -- common/autotest_common.sh@930 -- # kill -0 1211632 00:25:06.876 06:20:13 -- common/autotest_common.sh@931 -- # uname 00:25:06.876 06:20:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:06.876 06:20:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1211632 00:25:06.876 06:20:13 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:06.876 06:20:13 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:06.876 06:20:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1211632' 00:25:06.876 killing process with pid 1211632 00:25:06.876 06:20:13 -- common/autotest_common.sh@945 -- # kill 1211632 00:25:06.876 06:20:13 -- common/autotest_common.sh@950 -- # wait 1211632 00:25:07.136 06:20:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:07.136 06:20:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:07.136 06:20:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:07.136 06:20:13 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:07.136 06:20:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:07.136 06:20:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.136 06:20:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:07.136 06:20:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.674 06:20:15 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:09.674 00:25:09.674 real 0m36.696s 00:25:09.674 user 2m5.850s 00:25:09.674 sys 0m7.635s 00:25:09.674 06:20:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:09.674 06:20:15 -- common/autotest_common.sh@10 -- # set +x 00:25:09.674 ************************************ 00:25:09.674 END TEST nvmf_failover 00:25:09.674 ************************************ 00:25:09.674 06:20:15 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:09.674 06:20:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:09.674 06:20:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:09.674 06:20:15 -- common/autotest_common.sh@10 -- # set +x 00:25:09.674 ************************************ 00:25:09.674 START TEST nvmf_discovery 00:25:09.674 ************************************ 00:25:09.674 06:20:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:09.674 * Looking for test storage... 
00:25:09.674 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:09.674 06:20:15 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:09.674 06:20:15 -- nvmf/common.sh@7 -- # uname -s 00:25:09.674 06:20:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:09.674 06:20:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:09.674 06:20:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:09.674 06:20:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:09.674 06:20:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:09.674 06:20:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:09.674 06:20:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:09.674 06:20:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:09.674 06:20:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:09.674 06:20:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:09.674 06:20:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:09.674 06:20:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:09.674 06:20:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:09.674 06:20:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:09.674 06:20:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:09.674 06:20:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:09.674 06:20:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:09.674 06:20:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:09.674 06:20:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:09.674 06:20:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.674 06:20:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.674 06:20:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.674 06:20:15 -- paths/export.sh@5 -- # export PATH 00:25:09.674 06:20:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.674 06:20:15 -- nvmf/common.sh@46 -- # : 0 00:25:09.674 06:20:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:09.674 06:20:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:09.674 06:20:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:09.674 06:20:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:09.674 06:20:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:09.674 06:20:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:09.674 06:20:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:09.674 06:20:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:09.674 06:20:15 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:09.674 06:20:15 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:09.674 06:20:15 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:09.674 06:20:15 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:09.674 06:20:15 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:09.674 06:20:15 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:09.674 06:20:15 -- host/discovery.sh@25 -- # nvmftestinit 00:25:09.674 06:20:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:09.674 06:20:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:09.674 06:20:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:09.674 06:20:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:09.674 06:20:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:09.674 06:20:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.675 06:20:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:09.675 06:20:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.675 06:20:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:09.675 06:20:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:09.675 06:20:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:09.675 06:20:15 -- common/autotest_common.sh@10 -- # set +x 00:25:11.050 06:20:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:11.050 06:20:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:11.050 06:20:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:11.050 06:20:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:11.050 06:20:17 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:11.050 06:20:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:11.050 06:20:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:11.050 06:20:17 -- nvmf/common.sh@294 -- # net_devs=() 00:25:11.050 06:20:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:11.050 06:20:17 -- nvmf/common.sh@295 -- # e810=() 00:25:11.050 06:20:17 -- nvmf/common.sh@295 -- # local -ga e810 00:25:11.050 06:20:17 -- nvmf/common.sh@296 -- # x722=() 00:25:11.050 06:20:17 -- nvmf/common.sh@296 -- # local -ga x722 00:25:11.050 06:20:17 -- nvmf/common.sh@297 -- # mlx=() 00:25:11.050 06:20:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:11.050 06:20:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:11.050 06:20:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:11.050 06:20:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:11.050 06:20:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:11.050 06:20:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:11.050 06:20:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:11.050 06:20:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:11.050 06:20:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:11.050 06:20:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:11.050 06:20:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:11.050 06:20:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:11.050 06:20:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:11.050 06:20:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:11.050 06:20:17 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:11.050 06:20:17 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:11.050 06:20:17 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:11.050 06:20:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:11.050 06:20:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:11.050 06:20:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:11.050 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:11.050 06:20:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:11.050 06:20:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:11.050 06:20:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.050 06:20:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.050 06:20:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:11.050 06:20:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:11.050 06:20:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:11.050 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:11.050 06:20:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:11.050 06:20:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:11.050 06:20:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.050 06:20:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.050 06:20:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:11.050 06:20:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:11.050 06:20:17 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:11.050 06:20:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:11.050 06:20:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:11.050 
06:20:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.050 06:20:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:11.050 06:20:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.050 06:20:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:11.050 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:11.050 06:20:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.050 06:20:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:11.050 06:20:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.050 06:20:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:11.050 06:20:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.050 06:20:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:11.050 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:11.050 06:20:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.050 06:20:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:11.050 06:20:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:11.050 06:20:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:11.050 06:20:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:11.050 06:20:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:11.050 06:20:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:11.050 06:20:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:11.050 06:20:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:11.050 06:20:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:11.050 06:20:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:11.050 06:20:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:11.050 06:20:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:11.050 06:20:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:11.050 06:20:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:11.050 06:20:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:11.050 06:20:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:11.050 06:20:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:11.050 06:20:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:11.308 06:20:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:11.308 06:20:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:11.308 06:20:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:11.308 06:20:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:11.308 06:20:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:11.308 06:20:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:11.308 06:20:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:11.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:11.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:25:11.308 00:25:11.308 --- 10.0.0.2 ping statistics --- 00:25:11.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.308 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:25:11.308 06:20:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:11.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:11.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:25:11.308 00:25:11.308 --- 10.0.0.1 ping statistics --- 00:25:11.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.308 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:25:11.308 06:20:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:11.308 06:20:17 -- nvmf/common.sh@410 -- # return 0 00:25:11.308 06:20:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:11.308 06:20:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:11.308 06:20:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:11.308 06:20:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:11.308 06:20:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:11.308 06:20:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:11.308 06:20:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:11.308 06:20:17 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:11.308 06:20:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:11.308 06:20:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:11.308 06:20:17 -- common/autotest_common.sh@10 -- # set +x 00:25:11.308 06:20:17 -- nvmf/common.sh@469 -- # nvmfpid=1217440 00:25:11.308 06:20:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:11.308 06:20:17 -- nvmf/common.sh@470 -- # waitforlisten 1217440 00:25:11.309 06:20:17 -- common/autotest_common.sh@819 -- # '[' -z 1217440 ']' 00:25:11.309 06:20:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.309 06:20:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:11.309 06:20:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:11.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:11.309 06:20:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:11.309 06:20:17 -- common/autotest_common.sh@10 -- # set +x 00:25:11.309 [2024-07-13 06:20:17.735895] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:11.309 [2024-07-13 06:20:17.735967] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:11.309 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.309 [2024-07-13 06:20:17.798671] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.567 [2024-07-13 06:20:17.903566] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:11.567 [2024-07-13 06:20:17.903732] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:11.567 [2024-07-13 06:20:17.903750] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:11.567 [2024-07-13 06:20:17.903762] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
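What nvmf_tcp_init just did above is split the two detected E810 ports between the default namespace and a fresh one, so the target (10.0.0.2 on cvl_0_0 inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1 on cvl_0_1) talk over real hardware on a single machine. A hedged recap of that topology, built from the commands printed in the trace:

# Namespace split performed by nvmf_tcp_init (names/addresses as printed above).
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# The nvmf target and every RPC aimed at it therefore run inside the namespace,
# e.g.:  ip netns exec cvl_0_0_ns_spdk <path-to-spdk>/build/bin/nvmf_tgt -m 0x2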
00:25:11.567 [2024-07-13 06:20:17.903790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:12.502 06:20:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:12.502 06:20:18 -- common/autotest_common.sh@852 -- # return 0 00:25:12.502 06:20:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:12.502 06:20:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:12.502 06:20:18 -- common/autotest_common.sh@10 -- # set +x 00:25:12.502 06:20:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:12.502 06:20:18 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:12.502 06:20:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.502 06:20:18 -- common/autotest_common.sh@10 -- # set +x 00:25:12.502 [2024-07-13 06:20:18.731169] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:12.502 06:20:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.502 06:20:18 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:12.502 06:20:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.502 06:20:18 -- common/autotest_common.sh@10 -- # set +x 00:25:12.502 [2024-07-13 06:20:18.739343] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:12.502 06:20:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.502 06:20:18 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:12.502 06:20:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.502 06:20:18 -- common/autotest_common.sh@10 -- # set +x 00:25:12.502 null0 00:25:12.502 06:20:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.502 06:20:18 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:12.502 06:20:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.502 06:20:18 -- common/autotest_common.sh@10 -- # set +x 00:25:12.502 null1 00:25:12.502 06:20:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.502 06:20:18 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:12.502 06:20:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.502 06:20:18 -- common/autotest_common.sh@10 -- # set +x 00:25:12.502 06:20:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.502 06:20:18 -- host/discovery.sh@45 -- # hostpid=1217595 00:25:12.502 06:20:18 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:12.502 06:20:18 -- host/discovery.sh@46 -- # waitforlisten 1217595 /tmp/host.sock 00:25:12.502 06:20:18 -- common/autotest_common.sh@819 -- # '[' -z 1217595 ']' 00:25:12.502 06:20:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:25:12.502 06:20:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:12.502 06:20:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:12.502 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:12.502 06:20:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:12.502 06:20:18 -- common/autotest_common.sh@10 -- # set +x 00:25:12.502 [2024-07-13 06:20:18.810038] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:25:12.502 [2024-07-13 06:20:18.810114] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1217595 ] 00:25:12.502 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.502 [2024-07-13 06:20:18.875824] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.502 [2024-07-13 06:20:18.989092] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:12.502 [2024-07-13 06:20:18.989282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.436 06:20:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:13.436 06:20:19 -- common/autotest_common.sh@852 -- # return 0 00:25:13.436 06:20:19 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:13.436 06:20:19 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:13.436 06:20:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.436 06:20:19 -- common/autotest_common.sh@10 -- # set +x 00:25:13.436 06:20:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.436 06:20:19 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:13.436 06:20:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.436 06:20:19 -- common/autotest_common.sh@10 -- # set +x 00:25:13.436 06:20:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.436 06:20:19 -- host/discovery.sh@72 -- # notify_id=0 00:25:13.436 06:20:19 -- host/discovery.sh@78 -- # get_subsystem_names 00:25:13.436 06:20:19 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:13.436 06:20:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.436 06:20:19 -- common/autotest_common.sh@10 -- # set +x 00:25:13.436 06:20:19 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:13.436 06:20:19 -- host/discovery.sh@59 -- # sort 00:25:13.436 06:20:19 -- host/discovery.sh@59 -- # xargs 00:25:13.436 06:20:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.436 06:20:19 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:25:13.436 06:20:19 -- host/discovery.sh@79 -- # get_bdev_list 00:25:13.436 06:20:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:13.436 06:20:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:13.436 06:20:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.436 06:20:19 -- common/autotest_common.sh@10 -- # set +x 00:25:13.436 06:20:19 -- host/discovery.sh@55 -- # sort 00:25:13.436 06:20:19 -- host/discovery.sh@55 -- # xargs 00:25:13.436 06:20:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.436 06:20:19 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:25:13.436 06:20:19 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:13.436 06:20:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.436 06:20:19 -- common/autotest_common.sh@10 -- # set +x 00:25:13.436 06:20:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.436 06:20:19 -- host/discovery.sh@82 -- # get_subsystem_names 00:25:13.436 06:20:19 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:13.436 06:20:19 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:25:13.436 06:20:19 -- common/autotest_common.sh@10 -- # set +x 00:25:13.436 06:20:19 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:13.436 06:20:19 -- host/discovery.sh@59 -- # sort 00:25:13.436 06:20:19 -- host/discovery.sh@59 -- # xargs 00:25:13.436 06:20:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.436 06:20:19 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:25:13.436 06:20:19 -- host/discovery.sh@83 -- # get_bdev_list 00:25:13.436 06:20:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:13.436 06:20:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.436 06:20:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:13.436 06:20:19 -- common/autotest_common.sh@10 -- # set +x 00:25:13.436 06:20:19 -- host/discovery.sh@55 -- # sort 00:25:13.436 06:20:19 -- host/discovery.sh@55 -- # xargs 00:25:13.436 06:20:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.745 06:20:19 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:13.745 06:20:19 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:13.745 06:20:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.745 06:20:19 -- common/autotest_common.sh@10 -- # set +x 00:25:13.745 06:20:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.745 06:20:19 -- host/discovery.sh@86 -- # get_subsystem_names 00:25:13.745 06:20:19 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:13.745 06:20:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.745 06:20:19 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:13.745 06:20:19 -- common/autotest_common.sh@10 -- # set +x 00:25:13.745 06:20:19 -- host/discovery.sh@59 -- # sort 00:25:13.745 06:20:19 -- host/discovery.sh@59 -- # xargs 00:25:13.745 06:20:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.745 06:20:20 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:25:13.745 06:20:20 -- host/discovery.sh@87 -- # get_bdev_list 00:25:13.745 06:20:20 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:13.745 06:20:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.745 06:20:20 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:13.745 06:20:20 -- common/autotest_common.sh@10 -- # set +x 00:25:13.745 06:20:20 -- host/discovery.sh@55 -- # sort 00:25:13.745 06:20:20 -- host/discovery.sh@55 -- # xargs 00:25:13.745 06:20:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.745 06:20:20 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:13.745 06:20:20 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:13.745 06:20:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.745 06:20:20 -- common/autotest_common.sh@10 -- # set +x 00:25:13.745 [2024-07-13 06:20:20.055039] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:13.745 06:20:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.745 06:20:20 -- host/discovery.sh@92 -- # get_subsystem_names 00:25:13.745 06:20:20 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:13.745 06:20:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.745 06:20:20 -- common/autotest_common.sh@10 -- # set +x 00:25:13.745 06:20:20 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:13.745 06:20:20 -- host/discovery.sh@59 -- # sort 00:25:13.745 06:20:20 -- 
host/discovery.sh@59 -- # xargs 00:25:13.745 06:20:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.745 06:20:20 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:13.745 06:20:20 -- host/discovery.sh@93 -- # get_bdev_list 00:25:13.745 06:20:20 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:13.745 06:20:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.745 06:20:20 -- common/autotest_common.sh@10 -- # set +x 00:25:13.745 06:20:20 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:13.745 06:20:20 -- host/discovery.sh@55 -- # sort 00:25:13.745 06:20:20 -- host/discovery.sh@55 -- # xargs 00:25:13.745 06:20:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.745 06:20:20 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:25:13.745 06:20:20 -- host/discovery.sh@94 -- # get_notification_count 00:25:13.745 06:20:20 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:13.745 06:20:20 -- host/discovery.sh@74 -- # jq '. | length' 00:25:13.745 06:20:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.745 06:20:20 -- common/autotest_common.sh@10 -- # set +x 00:25:13.745 06:20:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.745 06:20:20 -- host/discovery.sh@74 -- # notification_count=0 00:25:13.745 06:20:20 -- host/discovery.sh@75 -- # notify_id=0 00:25:13.745 06:20:20 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:25:13.745 06:20:20 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:13.745 06:20:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:13.745 06:20:20 -- common/autotest_common.sh@10 -- # set +x 00:25:13.745 06:20:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:13.745 06:20:20 -- host/discovery.sh@100 -- # sleep 1 00:25:14.318 [2024-07-13 06:20:20.800359] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:14.318 [2024-07-13 06:20:20.800389] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:14.318 [2024-07-13 06:20:20.800412] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:14.595 [2024-07-13 06:20:20.886684] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:14.595 [2024-07-13 06:20:20.951521] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:14.595 [2024-07-13 06:20:20.951549] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:14.863 06:20:21 -- host/discovery.sh@101 -- # get_subsystem_names 00:25:14.863 06:20:21 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:14.863 06:20:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.863 06:20:21 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:14.863 06:20:21 -- common/autotest_common.sh@10 -- # set +x 00:25:14.863 06:20:21 -- host/discovery.sh@59 -- # sort 00:25:14.863 06:20:21 -- host/discovery.sh@59 -- # xargs 00:25:14.863 06:20:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:14.863 06:20:21 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.863 06:20:21 -- host/discovery.sh@102 -- # get_bdev_list 00:25:14.863 06:20:21 -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:14.863 06:20:21 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:14.863 06:20:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.863 06:20:21 -- host/discovery.sh@55 -- # sort 00:25:14.863 06:20:21 -- common/autotest_common.sh@10 -- # set +x 00:25:14.863 06:20:21 -- host/discovery.sh@55 -- # xargs 00:25:14.863 06:20:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:14.863 06:20:21 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:14.863 06:20:21 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:25:14.863 06:20:21 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:14.863 06:20:21 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:14.863 06:20:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.863 06:20:21 -- common/autotest_common.sh@10 -- # set +x 00:25:14.863 06:20:21 -- host/discovery.sh@63 -- # sort -n 00:25:14.863 06:20:21 -- host/discovery.sh@63 -- # xargs 00:25:14.863 06:20:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:14.863 06:20:21 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:25:14.863 06:20:21 -- host/discovery.sh@104 -- # get_notification_count 00:25:14.863 06:20:21 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:14.863 06:20:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.863 06:20:21 -- host/discovery.sh@74 -- # jq '. | length' 00:25:14.863 06:20:21 -- common/autotest_common.sh@10 -- # set +x 00:25:14.863 06:20:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:14.863 06:20:21 -- host/discovery.sh@74 -- # notification_count=1 00:25:14.863 06:20:21 -- host/discovery.sh@75 -- # notify_id=1 00:25:14.863 06:20:21 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:25:14.863 06:20:21 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:14.863 06:20:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.863 06:20:21 -- common/autotest_common.sh@10 -- # set +x 00:25:14.863 06:20:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:14.863 06:20:21 -- host/discovery.sh@109 -- # sleep 1 00:25:16.244 06:20:22 -- host/discovery.sh@110 -- # get_bdev_list 00:25:16.245 06:20:22 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:16.245 06:20:22 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:16.245 06:20:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:16.245 06:20:22 -- common/autotest_common.sh@10 -- # set +x 00:25:16.245 06:20:22 -- host/discovery.sh@55 -- # sort 00:25:16.245 06:20:22 -- host/discovery.sh@55 -- # xargs 00:25:16.245 06:20:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:16.245 06:20:22 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:16.245 06:20:22 -- host/discovery.sh@111 -- # get_notification_count 00:25:16.245 06:20:22 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:16.245 06:20:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:16.245 06:20:22 -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:16.245 06:20:22 -- common/autotest_common.sh@10 -- # set +x 00:25:16.245 06:20:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:16.245 06:20:22 -- host/discovery.sh@74 -- # notification_count=1 00:25:16.245 06:20:22 -- host/discovery.sh@75 -- # notify_id=2 00:25:16.245 06:20:22 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:25:16.245 06:20:22 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:16.245 06:20:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:16.245 06:20:22 -- common/autotest_common.sh@10 -- # set +x 00:25:16.245 [2024-07-13 06:20:22.458116] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:16.245 [2024-07-13 06:20:22.458621] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:16.245 [2024-07-13 06:20:22.458666] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:16.245 06:20:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:16.245 06:20:22 -- host/discovery.sh@117 -- # sleep 1 00:25:16.245 [2024-07-13 06:20:22.586039] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:16.245 [2024-07-13 06:20:22.646604] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:16.245 [2024-07-13 06:20:22.646631] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:16.245 [2024-07-13 06:20:22.646642] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:17.182 06:20:23 -- host/discovery.sh@118 -- # get_subsystem_names 00:25:17.182 06:20:23 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:17.182 06:20:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:17.182 06:20:23 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:17.182 06:20:23 -- common/autotest_common.sh@10 -- # set +x 00:25:17.182 06:20:23 -- host/discovery.sh@59 -- # sort 00:25:17.182 06:20:23 -- host/discovery.sh@59 -- # xargs 00:25:17.182 06:20:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:17.182 06:20:23 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.182 06:20:23 -- host/discovery.sh@119 -- # get_bdev_list 00:25:17.182 06:20:23 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:17.182 06:20:23 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:17.182 06:20:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:17.182 06:20:23 -- common/autotest_common.sh@10 -- # set +x 00:25:17.182 06:20:23 -- host/discovery.sh@55 -- # sort 00:25:17.182 06:20:23 -- host/discovery.sh@55 -- # xargs 00:25:17.182 06:20:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:17.182 06:20:23 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:17.182 06:20:23 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:25:17.182 06:20:23 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:17.182 06:20:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:17.182 06:20:23 -- common/autotest_common.sh@10 -- # set +x 00:25:17.182 06:20:23 -- host/discovery.sh@63 -- # jq -r 
'.[].ctrlrs[].trid.trsvcid' 00:25:17.182 06:20:23 -- host/discovery.sh@63 -- # sort -n 00:25:17.182 06:20:23 -- host/discovery.sh@63 -- # xargs 00:25:17.182 06:20:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:17.182 06:20:23 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:17.182 06:20:23 -- host/discovery.sh@121 -- # get_notification_count 00:25:17.182 06:20:23 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:17.182 06:20:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:17.182 06:20:23 -- host/discovery.sh@74 -- # jq '. | length' 00:25:17.182 06:20:23 -- common/autotest_common.sh@10 -- # set +x 00:25:17.182 06:20:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:17.182 06:20:23 -- host/discovery.sh@74 -- # notification_count=0 00:25:17.182 06:20:23 -- host/discovery.sh@75 -- # notify_id=2 00:25:17.182 06:20:23 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:25:17.182 06:20:23 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:17.182 06:20:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:17.182 06:20:23 -- common/autotest_common.sh@10 -- # set +x 00:25:17.182 [2024-07-13 06:20:23.626318] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:17.182 [2024-07-13 06:20:23.626352] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:17.182 06:20:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:17.182 06:20:23 -- host/discovery.sh@127 -- # sleep 1 00:25:17.182 [2024-07-13 06:20:23.634728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.183 [2024-07-13 06:20:23.634760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.183 [2024-07-13 06:20:23.634780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.183 [2024-07-13 06:20:23.634795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.183 [2024-07-13 06:20:23.634811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.183 [2024-07-13 06:20:23.634826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.183 [2024-07-13 06:20:23.634842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.183 [2024-07-13 06:20:23.634871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.183 [2024-07-13 06:20:23.634888] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e66b80 is same with the state(5) to be set 00:25:17.183 [2024-07-13 06:20:23.644734] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e66b80 (9): Bad file descriptor 00:25:17.183 [2024-07-13 06:20:23.654794] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:17.183 [2024-07-13 
06:20:23.655027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.183 [2024-07-13 06:20:23.655184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.183 [2024-07-13 06:20:23.655210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e66b80 with addr=10.0.0.2, port=4420 00:25:17.183 [2024-07-13 06:20:23.655247] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e66b80 is same with the state(5) to be set 00:25:17.183 [2024-07-13 06:20:23.655272] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e66b80 (9): Bad file descriptor 00:25:17.183 [2024-07-13 06:20:23.655296] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:17.183 [2024-07-13 06:20:23.655311] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:17.183 [2024-07-13 06:20:23.655338] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:17.183 [2024-07-13 06:20:23.655361] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.183 [2024-07-13 06:20:23.664879] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:17.183 [2024-07-13 06:20:23.665138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.183 [2024-07-13 06:20:23.665333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.183 [2024-07-13 06:20:23.665358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e66b80 with addr=10.0.0.2, port=4420 00:25:17.183 [2024-07-13 06:20:23.665373] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e66b80 is same with the state(5) to be set 00:25:17.183 [2024-07-13 06:20:23.665409] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e66b80 (9): Bad file descriptor 00:25:17.183 [2024-07-13 06:20:23.665433] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:17.183 [2024-07-13 06:20:23.665448] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:17.183 [2024-07-13 06:20:23.665463] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:17.183 [2024-07-13 06:20:23.665500] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
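The repeated "connect() failed, errno = 111" blocks here are the expected aftermath of removing the 10.0.0.2:4420 listener: bdev_nvme keeps retrying the dead path until the next discovery log page update drops it. The verification the script performs afterwards boils down to the same helper pipeline used earlier in discovery.sh; a sketch of it follows (rpc_cmd is the test harness wrapper around rpc.py, /tmp/host.sock is the host-side target started above):

# List the transport service IDs (ports) still attached to controller nvme0.
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
# Expected output once the removal has propagated: 4421 (the 4420 path is gone).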
00:25:17.183 [2024-07-13 06:20:23.674966] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:17.183 [2024-07-13 06:20:23.675194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.183 [2024-07-13 06:20:23.675328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.183 [2024-07-13 06:20:23.675353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e66b80 with addr=10.0.0.2, port=4420 00:25:17.183 [2024-07-13 06:20:23.675368] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e66b80 is same with the state(5) to be set 00:25:17.183 [2024-07-13 06:20:23.675390] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e66b80 (9): Bad file descriptor 00:25:17.183 [2024-07-13 06:20:23.675427] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:17.183 [2024-07-13 06:20:23.675442] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:17.183 [2024-07-13 06:20:23.675457] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:17.183 [2024-07-13 06:20:23.675478] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.183 [2024-07-13 06:20:23.685053] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:17.183 [2024-07-13 06:20:23.685320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.183 [2024-07-13 06:20:23.685497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.183 [2024-07-13 06:20:23.685522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e66b80 with addr=10.0.0.2, port=4420 00:25:17.183 [2024-07-13 06:20:23.685538] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e66b80 is same with the state(5) to be set 00:25:17.183 [2024-07-13 06:20:23.685559] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e66b80 (9): Bad file descriptor 00:25:17.183 [2024-07-13 06:20:23.685592] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:17.183 [2024-07-13 06:20:23.685610] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:17.183 [2024-07-13 06:20:23.685623] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:17.183 [2024-07-13 06:20:23.685660] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.442 [2024-07-13 06:20:23.695139] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:17.442 [2024-07-13 06:20:23.695383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.442 [2024-07-13 06:20:23.695579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.443 [2024-07-13 06:20:23.695609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e66b80 with addr=10.0.0.2, port=4420 00:25:17.443 [2024-07-13 06:20:23.695628] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e66b80 is same with the state(5) to be set 00:25:17.443 [2024-07-13 06:20:23.695653] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e66b80 (9): Bad file descriptor 00:25:17.443 [2024-07-13 06:20:23.695706] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:17.443 [2024-07-13 06:20:23.695729] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:17.443 [2024-07-13 06:20:23.695745] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:17.443 [2024-07-13 06:20:23.695774] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.443 [2024-07-13 06:20:23.705246] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:17.443 [2024-07-13 06:20:23.705471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.443 [2024-07-13 06:20:23.705643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.443 [2024-07-13 06:20:23.705671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e66b80 with addr=10.0.0.2, port=4420 00:25:17.443 [2024-07-13 06:20:23.705688] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e66b80 is same with the state(5) to be set 00:25:17.443 [2024-07-13 06:20:23.705712] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e66b80 (9): Bad file descriptor 00:25:17.443 [2024-07-13 06:20:23.705760] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:17.443 [2024-07-13 06:20:23.705780] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:17.443 [2024-07-13 06:20:23.705795] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:17.443 [2024-07-13 06:20:23.705816] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.443 [2024-07-13 06:20:23.712773] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:17.443 [2024-07-13 06:20:23.712807] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:18.382 06:20:24 -- host/discovery.sh@128 -- # get_subsystem_names 00:25:18.382 06:20:24 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:18.382 06:20:24 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:18.382 06:20:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.382 06:20:24 -- host/discovery.sh@59 -- # sort 00:25:18.382 06:20:24 -- common/autotest_common.sh@10 -- # set +x 00:25:18.382 06:20:24 -- host/discovery.sh@59 -- # xargs 00:25:18.382 06:20:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.382 06:20:24 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.382 06:20:24 -- host/discovery.sh@129 -- # get_bdev_list 00:25:18.382 06:20:24 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:18.382 06:20:24 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:18.382 06:20:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.382 06:20:24 -- common/autotest_common.sh@10 -- # set +x 00:25:18.382 06:20:24 -- host/discovery.sh@55 -- # sort 00:25:18.382 06:20:24 -- host/discovery.sh@55 -- # xargs 00:25:18.382 06:20:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.382 06:20:24 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:18.382 06:20:24 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:25:18.382 06:20:24 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:18.382 06:20:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.382 06:20:24 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:18.382 06:20:24 -- common/autotest_common.sh@10 -- # set +x 00:25:18.382 06:20:24 -- host/discovery.sh@63 -- # sort -n 00:25:18.382 06:20:24 -- host/discovery.sh@63 -- # xargs 00:25:18.382 06:20:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.382 06:20:24 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:25:18.382 06:20:24 -- host/discovery.sh@131 -- # get_notification_count 00:25:18.382 06:20:24 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:18.382 06:20:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.382 06:20:24 -- common/autotest_common.sh@10 -- # set +x 00:25:18.382 06:20:24 -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:18.382 06:20:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.382 06:20:24 -- host/discovery.sh@74 -- # notification_count=0 00:25:18.382 06:20:24 -- host/discovery.sh@75 -- # notify_id=2 00:25:18.382 06:20:24 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:25:18.382 06:20:24 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:18.382 06:20:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.382 06:20:24 -- common/autotest_common.sh@10 -- # set +x 00:25:18.382 06:20:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.382 06:20:24 -- host/discovery.sh@135 -- # sleep 1 00:25:19.323 06:20:25 -- host/discovery.sh@136 -- # get_subsystem_names 00:25:19.323 06:20:25 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:19.323 06:20:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:19.323 06:20:25 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:19.323 06:20:25 -- common/autotest_common.sh@10 -- # set +x 00:25:19.323 06:20:25 -- host/discovery.sh@59 -- # sort 00:25:19.323 06:20:25 -- host/discovery.sh@59 -- # xargs 00:25:19.583 06:20:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:19.583 06:20:25 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:25:19.583 06:20:25 -- host/discovery.sh@137 -- # get_bdev_list 00:25:19.583 06:20:25 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:19.583 06:20:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:19.583 06:20:25 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:19.583 06:20:25 -- common/autotest_common.sh@10 -- # set +x 00:25:19.583 06:20:25 -- host/discovery.sh@55 -- # sort 00:25:19.583 06:20:25 -- host/discovery.sh@55 -- # xargs 00:25:19.583 06:20:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:19.583 06:20:25 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:25:19.583 06:20:25 -- host/discovery.sh@138 -- # get_notification_count 00:25:19.583 06:20:25 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:19.583 06:20:25 -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:19.583 06:20:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:19.583 06:20:25 -- common/autotest_common.sh@10 -- # set +x 00:25:19.583 06:20:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:19.583 06:20:25 -- host/discovery.sh@74 -- # notification_count=2 00:25:19.583 06:20:25 -- host/discovery.sh@75 -- # notify_id=4 00:25:19.583 06:20:25 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:25:19.583 06:20:25 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:19.583 06:20:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:19.583 06:20:25 -- common/autotest_common.sh@10 -- # set +x 00:25:20.520 [2024-07-13 06:20:26.996628] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:20.520 [2024-07-13 06:20:26.996657] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:20.520 [2024-07-13 06:20:26.996683] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:20.778 [2024-07-13 06:20:27.124123] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:21.036 [2024-07-13 06:20:27.434165] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:21.036 [2024-07-13 06:20:27.434221] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:21.036 06:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.036 06:20:27 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:21.036 06:20:27 -- common/autotest_common.sh@640 -- # local es=0 00:25:21.036 06:20:27 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:21.036 06:20:27 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:21.036 06:20:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:21.036 06:20:27 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:21.036 06:20:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:21.036 06:20:27 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:21.036 06:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.036 06:20:27 -- common/autotest_common.sh@10 -- # set +x 00:25:21.036 request: 00:25:21.036 { 00:25:21.036 "name": "nvme", 00:25:21.036 "trtype": "tcp", 00:25:21.036 "traddr": "10.0.0.2", 00:25:21.036 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:21.036 "adrfam": "ipv4", 00:25:21.036 "trsvcid": "8009", 00:25:21.036 "wait_for_attach": true, 00:25:21.036 "method": "bdev_nvme_start_discovery", 00:25:21.036 "req_id": 1 00:25:21.036 } 00:25:21.036 Got JSON-RPC error response 00:25:21.036 response: 00:25:21.036 { 00:25:21.036 "code": -17, 00:25:21.036 "message": "File exists" 00:25:21.036 } 00:25:21.036 06:20:27 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:21.036 06:20:27 -- common/autotest_common.sh@643 -- # es=1 00:25:21.036 06:20:27 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:21.036 06:20:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:21.036 06:20:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:21.036 06:20:27 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:25:21.036 06:20:27 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:21.036 06:20:27 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:21.036 06:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.036 06:20:27 -- common/autotest_common.sh@10 -- # set +x 00:25:21.036 06:20:27 -- host/discovery.sh@67 -- # sort 00:25:21.036 06:20:27 -- host/discovery.sh@67 -- # xargs 00:25:21.036 06:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.036 06:20:27 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:25:21.036 06:20:27 -- host/discovery.sh@147 -- # get_bdev_list 00:25:21.036 06:20:27 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:21.036 06:20:27 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:21.036 06:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.036 06:20:27 -- common/autotest_common.sh@10 -- # set +x 00:25:21.036 06:20:27 -- host/discovery.sh@55 -- # sort 00:25:21.036 06:20:27 -- host/discovery.sh@55 -- # xargs 00:25:21.036 06:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.036 06:20:27 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:21.036 06:20:27 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:21.036 06:20:27 -- common/autotest_common.sh@640 -- # local es=0 00:25:21.036 06:20:27 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:21.036 06:20:27 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:21.036 06:20:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:21.036 06:20:27 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:21.036 06:20:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:21.036 06:20:27 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:21.036 06:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.036 06:20:27 -- common/autotest_common.sh@10 -- # set +x 00:25:21.036 request: 00:25:21.036 { 00:25:21.036 "name": "nvme_second", 00:25:21.036 "trtype": "tcp", 00:25:21.036 "traddr": "10.0.0.2", 00:25:21.036 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:21.036 "adrfam": "ipv4", 00:25:21.036 "trsvcid": "8009", 00:25:21.036 "wait_for_attach": true, 00:25:21.036 "method": "bdev_nvme_start_discovery", 00:25:21.036 "req_id": 1 00:25:21.036 } 00:25:21.037 Got JSON-RPC error response 00:25:21.037 response: 00:25:21.037 { 00:25:21.037 "code": -17, 00:25:21.037 "message": "File exists" 00:25:21.037 } 00:25:21.037 06:20:27 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:21.037 06:20:27 -- common/autotest_common.sh@643 -- # es=1 00:25:21.037 06:20:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:21.037 06:20:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:21.037 06:20:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:21.037 
06:20:27 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:25:21.037 06:20:27 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:21.037 06:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.037 06:20:27 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:21.037 06:20:27 -- common/autotest_common.sh@10 -- # set +x 00:25:21.037 06:20:27 -- host/discovery.sh@67 -- # sort 00:25:21.037 06:20:27 -- host/discovery.sh@67 -- # xargs 00:25:21.295 06:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.295 06:20:27 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:25:21.295 06:20:27 -- host/discovery.sh@153 -- # get_bdev_list 00:25:21.295 06:20:27 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:21.295 06:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.295 06:20:27 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:21.295 06:20:27 -- common/autotest_common.sh@10 -- # set +x 00:25:21.295 06:20:27 -- host/discovery.sh@55 -- # sort 00:25:21.295 06:20:27 -- host/discovery.sh@55 -- # xargs 00:25:21.295 06:20:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.295 06:20:27 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:21.295 06:20:27 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:21.295 06:20:27 -- common/autotest_common.sh@640 -- # local es=0 00:25:21.295 06:20:27 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:21.295 06:20:27 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:21.295 06:20:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:21.295 06:20:27 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:21.295 06:20:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:21.295 06:20:27 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:21.295 06:20:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.295 06:20:27 -- common/autotest_common.sh@10 -- # set +x 00:25:22.234 [2024-07-13 06:20:28.629539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.234 [2024-07-13 06:20:28.629723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:22.234 [2024-07-13 06:20:28.629752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e5c070 with addr=10.0.0.2, port=8010 00:25:22.234 [2024-07-13 06:20:28.629774] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:22.234 [2024-07-13 06:20:28.629789] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:22.234 [2024-07-13 06:20:28.629803] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:23.170 [2024-07-13 06:20:29.631988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:23.170 [2024-07-13 06:20:29.632149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:23.170 [2024-07-13 06:20:29.632175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection 
error of tqpair=0x1e5c070 with addr=10.0.0.2, port=8010 00:25:23.170 [2024-07-13 06:20:29.632210] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:23.170 [2024-07-13 06:20:29.632224] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:23.170 [2024-07-13 06:20:29.632237] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:24.550 [2024-07-13 06:20:30.634235] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:24.550 request: 00:25:24.550 { 00:25:24.550 "name": "nvme_second", 00:25:24.550 "trtype": "tcp", 00:25:24.550 "traddr": "10.0.0.2", 00:25:24.550 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:24.550 "adrfam": "ipv4", 00:25:24.550 "trsvcid": "8010", 00:25:24.550 "attach_timeout_ms": 3000, 00:25:24.550 "method": "bdev_nvme_start_discovery", 00:25:24.550 "req_id": 1 00:25:24.550 } 00:25:24.550 Got JSON-RPC error response 00:25:24.550 response: 00:25:24.550 { 00:25:24.550 "code": -110, 00:25:24.550 "message": "Connection timed out" 00:25:24.550 } 00:25:24.550 06:20:30 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:24.550 06:20:30 -- common/autotest_common.sh@643 -- # es=1 00:25:24.550 06:20:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:24.550 06:20:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:24.550 06:20:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:24.550 06:20:30 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:25:24.550 06:20:30 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:24.550 06:20:30 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:24.550 06:20:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.550 06:20:30 -- common/autotest_common.sh@10 -- # set +x 00:25:24.550 06:20:30 -- host/discovery.sh@67 -- # sort 00:25:24.550 06:20:30 -- host/discovery.sh@67 -- # xargs 00:25:24.551 06:20:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:24.551 06:20:30 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:25:24.551 06:20:30 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:25:24.551 06:20:30 -- host/discovery.sh@162 -- # kill 1217595 00:25:24.551 06:20:30 -- host/discovery.sh@163 -- # nvmftestfini 00:25:24.551 06:20:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:24.551 06:20:30 -- nvmf/common.sh@116 -- # sync 00:25:24.551 06:20:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:24.551 06:20:30 -- nvmf/common.sh@119 -- # set +e 00:25:24.551 06:20:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:24.551 06:20:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:24.551 rmmod nvme_tcp 00:25:24.551 rmmod nvme_fabrics 00:25:24.551 rmmod nvme_keyring 00:25:24.551 06:20:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:24.551 06:20:30 -- nvmf/common.sh@123 -- # set -e 00:25:24.551 06:20:30 -- nvmf/common.sh@124 -- # return 0 00:25:24.551 06:20:30 -- nvmf/common.sh@477 -- # '[' -n 1217440 ']' 00:25:24.551 06:20:30 -- nvmf/common.sh@478 -- # killprocess 1217440 00:25:24.551 06:20:30 -- common/autotest_common.sh@926 -- # '[' -z 1217440 ']' 00:25:24.551 06:20:30 -- common/autotest_common.sh@930 -- # kill -0 1217440 00:25:24.551 06:20:30 -- common/autotest_common.sh@931 -- # uname 00:25:24.551 06:20:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:24.551 06:20:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1217440 
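Note on the timeout case above: the third discovery attempt points at port 8010, where nothing listens, and uses a 3000 ms attach timeout, so after repeated ECONNREFUSED connect attempts the discovery poller gives up and the RPC returns -110 (Connection timed out). A hedged restatement of that invocation, with the flag-to-field mapping read off the JSON request printed above (rpc_cmd again being the harness wrapper around the host socket):

  # -b name, -t trtype, -a traddr, -s trsvcid, -f adrfam, -q hostnqn, -T attach_timeout_ms
  rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
      -q nqn.2021-12.io.spdk:test -T 3000
  # Expected result: JSON-RPC error -110 ("Connection timed out") once the
  # 3000 ms attach window expires, exactly as captured in the response above.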
00:25:24.551 06:20:30 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:24.551 06:20:30 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:24.551 06:20:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1217440' 00:25:24.551 killing process with pid 1217440 00:25:24.551 06:20:30 -- common/autotest_common.sh@945 -- # kill 1217440 00:25:24.551 06:20:30 -- common/autotest_common.sh@950 -- # wait 1217440 00:25:24.551 06:20:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:24.551 06:20:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:24.551 06:20:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:24.551 06:20:31 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:24.551 06:20:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:24.551 06:20:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.551 06:20:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:24.551 06:20:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.088 06:20:33 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:27.088 00:25:27.088 real 0m17.412s 00:25:27.088 user 0m27.300s 00:25:27.088 sys 0m2.808s 00:25:27.088 06:20:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:27.088 06:20:33 -- common/autotest_common.sh@10 -- # set +x 00:25:27.088 ************************************ 00:25:27.088 END TEST nvmf_discovery 00:25:27.088 ************************************ 00:25:27.088 06:20:33 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:27.088 06:20:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:27.088 06:20:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:27.088 06:20:33 -- common/autotest_common.sh@10 -- # set +x 00:25:27.088 ************************************ 00:25:27.088 START TEST nvmf_discovery_remove_ifc 00:25:27.088 ************************************ 00:25:27.088 06:20:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:27.088 * Looking for test storage... 
00:25:27.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:27.088 06:20:33 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:27.088 06:20:33 -- nvmf/common.sh@7 -- # uname -s 00:25:27.088 06:20:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:27.088 06:20:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:27.088 06:20:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:27.088 06:20:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:27.089 06:20:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:27.089 06:20:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:27.089 06:20:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:27.089 06:20:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:27.089 06:20:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:27.089 06:20:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:27.089 06:20:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:27.089 06:20:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:27.089 06:20:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:27.089 06:20:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:27.089 06:20:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:27.089 06:20:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:27.089 06:20:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:27.089 06:20:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:27.089 06:20:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:27.089 06:20:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.089 06:20:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.089 06:20:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.089 06:20:33 -- paths/export.sh@5 -- # export PATH 00:25:27.089 06:20:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.089 06:20:33 -- nvmf/common.sh@46 -- # : 0 00:25:27.089 06:20:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:27.089 06:20:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:27.089 06:20:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:27.089 06:20:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:27.089 06:20:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:27.089 06:20:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:27.089 06:20:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:27.089 06:20:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:27.089 06:20:33 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:27.089 06:20:33 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:27.089 06:20:33 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:27.089 06:20:33 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:27.089 06:20:33 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:27.089 06:20:33 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:27.089 06:20:33 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:27.089 06:20:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:27.089 06:20:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:27.089 06:20:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:27.089 06:20:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:27.089 06:20:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:27.089 06:20:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.089 06:20:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:27.089 06:20:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.089 06:20:33 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:27.089 06:20:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:27.089 06:20:33 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:27.089 06:20:33 -- common/autotest_common.sh@10 -- # set +x 00:25:28.987 06:20:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:28.988 06:20:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:28.988 06:20:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:28.988 06:20:35 
-- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:28.988 06:20:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:28.988 06:20:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:28.988 06:20:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:28.988 06:20:35 -- nvmf/common.sh@294 -- # net_devs=() 00:25:28.988 06:20:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:28.988 06:20:35 -- nvmf/common.sh@295 -- # e810=() 00:25:28.988 06:20:35 -- nvmf/common.sh@295 -- # local -ga e810 00:25:28.988 06:20:35 -- nvmf/common.sh@296 -- # x722=() 00:25:28.988 06:20:35 -- nvmf/common.sh@296 -- # local -ga x722 00:25:28.988 06:20:35 -- nvmf/common.sh@297 -- # mlx=() 00:25:28.988 06:20:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:28.988 06:20:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:28.988 06:20:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:28.988 06:20:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:28.988 06:20:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:28.988 06:20:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:28.988 06:20:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:28.988 06:20:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:28.988 06:20:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:28.988 06:20:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:28.988 06:20:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:28.988 06:20:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:28.988 06:20:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:28.988 06:20:35 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:28.988 06:20:35 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:28.988 06:20:35 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:28.988 06:20:35 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:28.988 06:20:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:28.988 06:20:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:28.988 06:20:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:28.988 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:28.988 06:20:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:28.988 06:20:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:28.988 06:20:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.988 06:20:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.988 06:20:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:28.988 06:20:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:28.988 06:20:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:28.988 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:28.988 06:20:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:28.988 06:20:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:28.988 06:20:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.988 06:20:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.988 06:20:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:28.988 06:20:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:28.988 06:20:35 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:28.988 06:20:35 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:28.988 06:20:35 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:28.988 06:20:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.988 06:20:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:28.988 06:20:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.988 06:20:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:28.988 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:28.988 06:20:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.988 06:20:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:28.988 06:20:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.988 06:20:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:28.988 06:20:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.988 06:20:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:28.988 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:28.988 06:20:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.988 06:20:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:28.988 06:20:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:28.988 06:20:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:28.988 06:20:35 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:28.988 06:20:35 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:28.988 06:20:35 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:28.988 06:20:35 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:28.988 06:20:35 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:28.988 06:20:35 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:28.988 06:20:35 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:28.988 06:20:35 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:28.988 06:20:35 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:28.988 06:20:35 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:28.988 06:20:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:28.988 06:20:35 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:28.988 06:20:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:28.988 06:20:35 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:28.988 06:20:35 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:28.988 06:20:35 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:28.988 06:20:35 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:28.988 06:20:35 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:28.988 06:20:35 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:28.988 06:20:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:28.988 06:20:35 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:28.988 06:20:35 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:28.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:28.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:25:28.988 00:25:28.988 --- 10.0.0.2 ping statistics --- 00:25:28.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.988 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:25:28.988 06:20:35 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:28.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:28.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:25:28.988 00:25:28.988 --- 10.0.0.1 ping statistics --- 00:25:28.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.988 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:25:28.988 06:20:35 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:28.988 06:20:35 -- nvmf/common.sh@410 -- # return 0 00:25:28.988 06:20:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:28.988 06:20:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:28.988 06:20:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:28.988 06:20:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:28.988 06:20:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:28.988 06:20:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:28.988 06:20:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:28.988 06:20:35 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:28.988 06:20:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:28.988 06:20:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:28.988 06:20:35 -- common/autotest_common.sh@10 -- # set +x 00:25:28.988 06:20:35 -- nvmf/common.sh@469 -- # nvmfpid=1221186 00:25:28.988 06:20:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:28.988 06:20:35 -- nvmf/common.sh@470 -- # waitforlisten 1221186 00:25:28.988 06:20:35 -- common/autotest_common.sh@819 -- # '[' -z 1221186 ']' 00:25:28.988 06:20:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:28.988 06:20:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:28.988 06:20:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:28.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:28.988 06:20:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:28.989 06:20:35 -- common/autotest_common.sh@10 -- # set +x 00:25:28.989 [2024-07-13 06:20:35.213922] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:28.989 [2024-07-13 06:20:35.213997] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:28.989 EAL: No free 2048 kB hugepages reported on node 1 00:25:28.989 [2024-07-13 06:20:35.282956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.989 [2024-07-13 06:20:35.396351] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:28.989 [2024-07-13 06:20:35.396517] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
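Note on the setup above: nvmf_tcp_init splits the two cvl_0_* ports, the target port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2 while the initiator port (cvl_0_1) stays in the root namespace as 10.0.0.1, and both directions are then verified with a single ping. Condensed from the xtrace above (run as root), purely for readability:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target side
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1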
00:25:28.989 [2024-07-13 06:20:35.396536] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:28.989 [2024-07-13 06:20:35.396551] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:28.989 [2024-07-13 06:20:35.396583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.923 06:20:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:29.923 06:20:36 -- common/autotest_common.sh@852 -- # return 0 00:25:29.923 06:20:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:29.923 06:20:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:29.923 06:20:36 -- common/autotest_common.sh@10 -- # set +x 00:25:29.923 06:20:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:29.923 06:20:36 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:29.923 06:20:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:29.923 06:20:36 -- common/autotest_common.sh@10 -- # set +x 00:25:29.923 [2024-07-13 06:20:36.176325] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:29.923 [2024-07-13 06:20:36.184493] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:29.923 null0 00:25:29.923 [2024-07-13 06:20:36.216461] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:29.923 06:20:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:29.923 06:20:36 -- host/discovery_remove_ifc.sh@59 -- # hostpid=1221342 00:25:29.923 06:20:36 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:29.923 06:20:36 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1221342 /tmp/host.sock 00:25:29.923 06:20:36 -- common/autotest_common.sh@819 -- # '[' -z 1221342 ']' 00:25:29.923 06:20:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:25:29.924 06:20:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:29.924 06:20:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:29.924 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:29.924 06:20:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:29.924 06:20:36 -- common/autotest_common.sh@10 -- # set +x 00:25:29.924 [2024-07-13 06:20:36.278943] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:25:29.924 [2024-07-13 06:20:36.279018] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1221342 ] 00:25:29.924 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.924 [2024-07-13 06:20:36.346336] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.182 [2024-07-13 06:20:36.461396] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:30.182 [2024-07-13 06:20:36.461561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.785 06:20:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:30.785 06:20:37 -- common/autotest_common.sh@852 -- # return 0 00:25:30.785 06:20:37 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:30.785 06:20:37 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:30.785 06:20:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.785 06:20:37 -- common/autotest_common.sh@10 -- # set +x 00:25:30.785 06:20:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.785 06:20:37 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:30.785 06:20:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.785 06:20:37 -- common/autotest_common.sh@10 -- # set +x 00:25:31.048 06:20:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:31.048 06:20:37 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:31.048 06:20:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:31.048 06:20:37 -- common/autotest_common.sh@10 -- # set +x 00:25:31.982 [2024-07-13 06:20:38.376665] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:31.982 [2024-07-13 06:20:38.376699] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:31.982 [2024-07-13 06:20:38.376726] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:31.982 [2024-07-13 06:20:38.465025] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:32.240 [2024-07-13 06:20:38.566802] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:32.240 [2024-07-13 06:20:38.566858] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:32.240 [2024-07-13 06:20:38.566922] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:32.240 [2024-07-13 06:20:38.566947] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:32.240 [2024-07-13 06:20:38.566970] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:32.240 06:20:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:32.240 06:20:38 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:32.240 06:20:38 -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:25:32.240 06:20:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:32.240 06:20:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:32.240 06:20:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:32.240 06:20:38 -- common/autotest_common.sh@10 -- # set +x 00:25:32.240 06:20:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:32.240 06:20:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:32.240 [2024-07-13 06:20:38.574278] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x24861f0 was disconnected and freed. delete nvme_qpair. 00:25:32.240 06:20:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:32.240 06:20:38 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:32.240 06:20:38 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:32.240 06:20:38 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:32.240 06:20:38 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:32.240 06:20:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:32.240 06:20:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:32.240 06:20:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:32.240 06:20:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:32.240 06:20:38 -- common/autotest_common.sh@10 -- # set +x 00:25:32.240 06:20:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:32.240 06:20:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:32.240 06:20:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:32.240 06:20:38 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:32.240 06:20:38 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:33.614 06:20:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:33.614 06:20:39 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:33.614 06:20:39 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:33.614 06:20:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.614 06:20:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:33.614 06:20:39 -- common/autotest_common.sh@10 -- # set +x 00:25:33.614 06:20:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:33.614 06:20:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.614 06:20:39 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:33.614 06:20:39 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:34.544 06:20:40 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:34.544 06:20:40 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:34.544 06:20:40 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:34.544 06:20:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:34.544 06:20:40 -- common/autotest_common.sh@10 -- # set +x 00:25:34.544 06:20:40 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:34.544 06:20:40 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:34.544 06:20:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:34.544 06:20:40 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:34.544 06:20:40 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:35.476 06:20:41 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:35.476 06:20:41 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:25:35.476 06:20:41 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:35.476 06:20:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:35.476 06:20:41 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:35.476 06:20:41 -- common/autotest_common.sh@10 -- # set +x 00:25:35.476 06:20:41 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:35.476 06:20:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:35.476 06:20:41 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:35.476 06:20:41 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:36.407 06:20:42 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:36.407 06:20:42 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:36.407 06:20:42 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:36.407 06:20:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.407 06:20:42 -- common/autotest_common.sh@10 -- # set +x 00:25:36.407 06:20:42 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:36.407 06:20:42 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:36.407 06:20:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.407 06:20:42 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:36.407 06:20:42 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:37.777 06:20:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:37.777 06:20:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:37.777 06:20:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:37.777 06:20:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:37.777 06:20:43 -- common/autotest_common.sh@10 -- # set +x 00:25:37.777 06:20:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:37.777 06:20:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:37.777 06:20:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:37.777 06:20:43 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:37.777 06:20:43 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:37.777 [2024-07-13 06:20:44.008026] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:37.777 [2024-07-13 06:20:44.008096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.777 [2024-07-13 06:20:44.008133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.777 [2024-07-13 06:20:44.008150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.777 [2024-07-13 06:20:44.008191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.777 [2024-07-13 06:20:44.008205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.777 [2024-07-13 06:20:44.008217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.777 [2024-07-13 06:20:44.008230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:25:37.777 [2024-07-13 06:20:44.008259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.777 [2024-07-13 06:20:44.008275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.777 [2024-07-13 06:20:44.008290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.777 [2024-07-13 06:20:44.008306] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244c810 is same with the state(5) to be set 00:25:37.777 [2024-07-13 06:20:44.018045] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x244c810 (9): Bad file descriptor 00:25:37.777 [2024-07-13 06:20:44.028092] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:38.708 06:20:44 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:38.708 06:20:44 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:38.708 06:20:44 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:38.708 06:20:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:38.708 06:20:44 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:38.708 06:20:44 -- common/autotest_common.sh@10 -- # set +x 00:25:38.708 06:20:44 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:38.708 [2024-07-13 06:20:45.085936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:39.641 [2024-07-13 06:20:46.109914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:39.641 [2024-07-13 06:20:46.109989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x244c810 with addr=10.0.0.2, port=4420 00:25:39.641 [2024-07-13 06:20:46.110017] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244c810 is same with the state(5) to be set 00:25:39.641 [2024-07-13 06:20:46.110055] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:39.641 [2024-07-13 06:20:46.110075] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:39.641 [2024-07-13 06:20:46.110089] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:39.641 [2024-07-13 06:20:46.110113] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:25:39.641 [2024-07-13 06:20:46.110562] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x244c810 (9): Bad file descriptor 00:25:39.641 [2024-07-13 06:20:46.110607] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
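Note on the errno 110 block above: the ETIMEDOUT failures and aborted admin commands are the direct consequence of the address removal and link-down performed a few entries earlier, after which the host can no longer reach 10.0.0.2:4420 and the test polls until its bdev list is empty. A hedged reconstruction of that step from the discovery_remove_ifc.sh xtrace:

  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
  # wait_for_bdev '' as seen in the trace: poll until no nvme bdev is left
  while [[ -n "$(rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" ]]; do
      sleep 1
  done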
00:25:39.641 [2024-07-13 06:20:46.110657] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:39.641 [2024-07-13 06:20:46.110697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.641 [2024-07-13 06:20:46.110722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.641 [2024-07-13 06:20:46.110742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.641 [2024-07-13 06:20:46.110758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.641 [2024-07-13 06:20:46.110774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.641 [2024-07-13 06:20:46.110789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.641 [2024-07-13 06:20:46.110805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.641 [2024-07-13 06:20:46.110820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.641 [2024-07-13 06:20:46.110836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.641 [2024-07-13 06:20:46.110850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.641 [2024-07-13 06:20:46.110882] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:25:39.641 [2024-07-13 06:20:46.111054] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x244cc20 (9): Bad file descriptor 00:25:39.641 [2024-07-13 06:20:46.112071] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:39.641 [2024-07-13 06:20:46.112092] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:25:39.641 06:20:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.641 06:20:46 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:39.641 06:20:46 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:41.013 06:20:47 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:41.013 06:20:47 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:41.013 06:20:47 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:41.013 06:20:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.013 06:20:47 -- common/autotest_common.sh@10 -- # set +x 00:25:41.013 06:20:47 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:41.013 06:20:47 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:41.013 06:20:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.013 06:20:47 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:41.013 06:20:47 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:41.013 06:20:47 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:41.013 06:20:47 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:41.013 06:20:47 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:41.013 06:20:47 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:41.013 06:20:47 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:41.013 06:20:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.013 06:20:47 -- common/autotest_common.sh@10 -- # set +x 00:25:41.013 06:20:47 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:41.013 06:20:47 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:41.013 06:20:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.013 06:20:47 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:41.013 06:20:47 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:41.943 [2024-07-13 06:20:48.122136] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:41.943 [2024-07-13 06:20:48.122176] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:41.943 [2024-07-13 06:20:48.122204] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:41.943 [2024-07-13 06:20:48.210487] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:41.943 06:20:48 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:41.943 06:20:48 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:41.943 06:20:48 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:41.943 06:20:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.943 06:20:48 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:41.943 06:20:48 -- common/autotest_common.sh@10 -- # set +x 
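Note: once the address is re-added and the port brought back up (the @82/@83 steps above), the discovery poller reattaches the subsystem as a fresh controller, nvme1, so the namespace bdev is expected to come back under the new name nvme1n1 rather than nvme0n1. The corresponding wait, reconstructed from the xtrace in the same hedged way as before:

  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  # wait_for_bdev nvme1n1: poll until the re-attached namespace shows up
  while [[ "$(rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" != nvme1n1 ]]; do
      sleep 1
  done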
00:25:41.943 06:20:48 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:41.943 06:20:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.943 06:20:48 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:41.943 06:20:48 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:41.943 [2024-07-13 06:20:48.393873] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:41.943 [2024-07-13 06:20:48.393939] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:41.943 [2024-07-13 06:20:48.393973] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:41.943 [2024-07-13 06:20:48.393998] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:41.943 [2024-07-13 06:20:48.394012] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:41.943 [2024-07-13 06:20:48.400556] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x245b1d0 was disconnected and freed. delete nvme_qpair. 00:25:42.888 06:20:49 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:42.888 06:20:49 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.888 06:20:49 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:42.888 06:20:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:42.888 06:20:49 -- common/autotest_common.sh@10 -- # set +x 00:25:42.888 06:20:49 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:42.888 06:20:49 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:42.888 06:20:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:42.888 06:20:49 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:42.888 06:20:49 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:42.888 06:20:49 -- host/discovery_remove_ifc.sh@90 -- # killprocess 1221342 00:25:42.888 06:20:49 -- common/autotest_common.sh@926 -- # '[' -z 1221342 ']' 00:25:42.888 06:20:49 -- common/autotest_common.sh@930 -- # kill -0 1221342 00:25:42.888 06:20:49 -- common/autotest_common.sh@931 -- # uname 00:25:42.888 06:20:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:42.888 06:20:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1221342 00:25:42.888 06:20:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:42.888 06:20:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:42.888 06:20:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1221342' 00:25:42.888 killing process with pid 1221342 00:25:42.888 06:20:49 -- common/autotest_common.sh@945 -- # kill 1221342 00:25:42.888 06:20:49 -- common/autotest_common.sh@950 -- # wait 1221342 00:25:43.147 06:20:49 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:43.147 06:20:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:43.147 06:20:49 -- nvmf/common.sh@116 -- # sync 00:25:43.147 06:20:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:43.147 06:20:49 -- nvmf/common.sh@119 -- # set +e 00:25:43.147 06:20:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:43.147 06:20:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:43.147 rmmod nvme_tcp 00:25:43.147 rmmod nvme_fabrics 00:25:43.147 rmmod nvme_keyring 00:25:43.405 06:20:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:43.405 06:20:49 -- nvmf/common.sh@123 -- # set -e 00:25:43.405 06:20:49 
-- nvmf/common.sh@124 -- # return 0 00:25:43.405 06:20:49 -- nvmf/common.sh@477 -- # '[' -n 1221186 ']' 00:25:43.405 06:20:49 -- nvmf/common.sh@478 -- # killprocess 1221186 00:25:43.405 06:20:49 -- common/autotest_common.sh@926 -- # '[' -z 1221186 ']' 00:25:43.405 06:20:49 -- common/autotest_common.sh@930 -- # kill -0 1221186 00:25:43.405 06:20:49 -- common/autotest_common.sh@931 -- # uname 00:25:43.405 06:20:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:43.405 06:20:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1221186 00:25:43.405 06:20:49 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:43.405 06:20:49 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:43.405 06:20:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1221186' 00:25:43.405 killing process with pid 1221186 00:25:43.405 06:20:49 -- common/autotest_common.sh@945 -- # kill 1221186 00:25:43.405 06:20:49 -- common/autotest_common.sh@950 -- # wait 1221186 00:25:43.664 06:20:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:43.664 06:20:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:43.664 06:20:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:43.664 06:20:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:43.664 06:20:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:43.664 06:20:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.664 06:20:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:43.664 06:20:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.570 06:20:52 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:45.570 00:25:45.570 real 0m18.926s 00:25:45.570 user 0m26.909s 00:25:45.570 sys 0m2.969s 00:25:45.570 06:20:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:45.570 06:20:52 -- common/autotest_common.sh@10 -- # set +x 00:25:45.570 ************************************ 00:25:45.570 END TEST nvmf_discovery_remove_ifc 00:25:45.570 ************************************ 00:25:45.570 06:20:52 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:25:45.570 06:20:52 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:45.570 06:20:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:45.570 06:20:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:45.570 06:20:52 -- common/autotest_common.sh@10 -- # set +x 00:25:45.570 ************************************ 00:25:45.570 START TEST nvmf_digest 00:25:45.570 ************************************ 00:25:45.570 06:20:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:45.829 * Looking for test storage... 
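Tear-down at the end of the test uses the same killprocess pattern seen throughout this log: verify the pid, check what the process actually is, refuse to touch sudo, then kill and reap it. A rough sketch assembled from the xtrace lines above (the real helper lives in autotest_common.sh; this is not its verbatim source):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                   # the '[ -z ... ]' guard in the trace
        kill -0 "$pid" 2>/dev/null || return 0      # nothing left to kill
        [ "$(uname)" = Linux ] || return 1          # only the Linux path is shown in the trace
        local name
        name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0 or reactor_1
        [ "$name" = sudo ] && return 1              # never signal the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                     # reap it so sockets and ports free up
    }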
00:25:45.829 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:45.829 06:20:52 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:45.829 06:20:52 -- nvmf/common.sh@7 -- # uname -s 00:25:45.829 06:20:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:45.829 06:20:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:45.829 06:20:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:45.829 06:20:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:45.829 06:20:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:45.829 06:20:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:45.829 06:20:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:45.829 06:20:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:45.829 06:20:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:45.829 06:20:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:45.829 06:20:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:45.829 06:20:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:45.829 06:20:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:45.829 06:20:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:45.829 06:20:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:45.829 06:20:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:45.829 06:20:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:45.829 06:20:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:45.829 06:20:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:45.829 06:20:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.829 06:20:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.829 06:20:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.829 06:20:52 -- paths/export.sh@5 -- # export PATH 00:25:45.829 06:20:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.829 06:20:52 -- nvmf/common.sh@46 -- # : 0 00:25:45.829 06:20:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:45.829 06:20:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:45.829 06:20:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:45.829 06:20:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:45.829 06:20:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:45.829 06:20:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:45.829 06:20:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:45.829 06:20:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:45.829 06:20:52 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:45.829 06:20:52 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:45.829 06:20:52 -- host/digest.sh@16 -- # runtime=2 00:25:45.829 06:20:52 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:25:45.829 06:20:52 -- host/digest.sh@132 -- # nvmftestinit 00:25:45.829 06:20:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:45.829 06:20:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:45.829 06:20:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:45.829 06:20:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:45.829 06:20:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:45.829 06:20:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.829 06:20:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:45.829 06:20:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.829 06:20:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:45.829 06:20:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:45.829 06:20:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:45.829 06:20:52 -- common/autotest_common.sh@10 -- # set +x 00:25:47.764 06:20:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:47.764 06:20:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:47.764 06:20:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:47.764 06:20:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:47.764 06:20:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:47.764 06:20:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:47.764 06:20:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:47.764 06:20:54 -- 
nvmf/common.sh@294 -- # net_devs=() 00:25:47.764 06:20:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:47.764 06:20:54 -- nvmf/common.sh@295 -- # e810=() 00:25:47.764 06:20:54 -- nvmf/common.sh@295 -- # local -ga e810 00:25:47.764 06:20:54 -- nvmf/common.sh@296 -- # x722=() 00:25:47.764 06:20:54 -- nvmf/common.sh@296 -- # local -ga x722 00:25:47.764 06:20:54 -- nvmf/common.sh@297 -- # mlx=() 00:25:47.764 06:20:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:47.764 06:20:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:47.764 06:20:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:47.764 06:20:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:47.764 06:20:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:47.764 06:20:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:47.764 06:20:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:47.764 06:20:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:47.764 06:20:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:47.764 06:20:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:47.764 06:20:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:47.764 06:20:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:47.764 06:20:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:47.764 06:20:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:47.764 06:20:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:47.764 06:20:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:47.764 06:20:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:47.764 06:20:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:47.764 06:20:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:47.764 06:20:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:47.764 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:47.764 06:20:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:47.764 06:20:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:47.764 06:20:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:47.764 06:20:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:47.764 06:20:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:47.764 06:20:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:47.764 06:20:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:47.764 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:47.764 06:20:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:47.764 06:20:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:47.764 06:20:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:47.764 06:20:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:47.764 06:20:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:47.764 06:20:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:47.764 06:20:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:47.764 06:20:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:47.764 06:20:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:47.764 06:20:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:47.764 06:20:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:47.764 06:20:54 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:47.764 06:20:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:47.764 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:47.764 06:20:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:47.764 06:20:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:47.764 06:20:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:47.764 06:20:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:47.764 06:20:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:47.764 06:20:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:47.764 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:47.764 06:20:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:47.764 06:20:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:47.764 06:20:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:47.764 06:20:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:47.764 06:20:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:47.764 06:20:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:47.764 06:20:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:47.764 06:20:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:47.764 06:20:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:47.764 06:20:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:47.764 06:20:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:47.764 06:20:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:47.764 06:20:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:47.764 06:20:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:47.764 06:20:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:47.764 06:20:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:47.764 06:20:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:47.764 06:20:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:47.764 06:20:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:47.764 06:20:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:47.764 06:20:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:47.764 06:20:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:47.764 06:20:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:47.764 06:20:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:47.765 06:20:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:47.765 06:20:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:47.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:47.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:25:47.765 00:25:47.765 --- 10.0.0.2 ping statistics --- 00:25:47.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.765 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:25:47.765 06:20:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:47.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:47.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:25:47.765 00:25:47.765 --- 10.0.0.1 ping statistics --- 00:25:47.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.765 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:25:47.765 06:20:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:47.765 06:20:54 -- nvmf/common.sh@410 -- # return 0 00:25:47.765 06:20:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:47.765 06:20:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:47.765 06:20:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:47.765 06:20:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:47.765 06:20:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:47.765 06:20:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:47.765 06:20:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:47.765 06:20:54 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:47.765 06:20:54 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:25:47.765 06:20:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:47.765 06:20:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:47.765 06:20:54 -- common/autotest_common.sh@10 -- # set +x 00:25:47.765 ************************************ 00:25:47.765 START TEST nvmf_digest_clean 00:25:47.765 ************************************ 00:25:47.765 06:20:54 -- common/autotest_common.sh@1104 -- # run_digest 00:25:47.765 06:20:54 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:25:47.765 06:20:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:47.765 06:20:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:47.765 06:20:54 -- common/autotest_common.sh@10 -- # set +x 00:25:47.765 06:20:54 -- nvmf/common.sh@469 -- # nvmfpid=1224904 00:25:47.765 06:20:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:47.765 06:20:54 -- nvmf/common.sh@470 -- # waitforlisten 1224904 00:25:47.765 06:20:54 -- common/autotest_common.sh@819 -- # '[' -z 1224904 ']' 00:25:47.765 06:20:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:47.765 06:20:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:47.765 06:20:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:47.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:47.765 06:20:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:47.765 06:20:54 -- common/autotest_common.sh@10 -- # set +x 00:25:47.765 [2024-07-13 06:20:54.257660] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:25:47.765 [2024-07-13 06:20:54.257730] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:48.024 EAL: No free 2048 kB hugepages reported on node 1 00:25:48.024 [2024-07-13 06:20:54.325331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.024 [2024-07-13 06:20:54.430107] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:48.024 [2024-07-13 06:20:54.430268] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:48.024 [2024-07-13 06:20:54.430300] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:48.024 [2024-07-13 06:20:54.430314] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:48.024 [2024-07-13 06:20:54.430347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:48.024 06:20:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:48.024 06:20:54 -- common/autotest_common.sh@852 -- # return 0 00:25:48.024 06:20:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:48.024 06:20:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:48.024 06:20:54 -- common/autotest_common.sh@10 -- # set +x 00:25:48.024 06:20:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:48.024 06:20:54 -- host/digest.sh@120 -- # common_target_config 00:25:48.024 06:20:54 -- host/digest.sh@43 -- # rpc_cmd 00:25:48.024 06:20:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:48.024 06:20:54 -- common/autotest_common.sh@10 -- # set +x 00:25:48.282 null0 00:25:48.282 [2024-07-13 06:20:54.593699] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:48.282 [2024-07-13 06:20:54.617939] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:48.282 06:20:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:48.282 06:20:54 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:25:48.282 06:20:54 -- host/digest.sh@77 -- # local rw bs qd 00:25:48.282 06:20:54 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:48.282 06:20:54 -- host/digest.sh@80 -- # rw=randread 00:25:48.282 06:20:54 -- host/digest.sh@80 -- # bs=4096 00:25:48.282 06:20:54 -- host/digest.sh@80 -- # qd=128 00:25:48.282 06:20:54 -- host/digest.sh@82 -- # bperfpid=1225027 00:25:48.282 06:20:54 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:48.282 06:20:54 -- host/digest.sh@83 -- # waitforlisten 1225027 /var/tmp/bperf.sock 00:25:48.282 06:20:54 -- common/autotest_common.sh@819 -- # '[' -z 1225027 ']' 00:25:48.282 06:20:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:48.282 06:20:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:48.282 06:20:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:48.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:25:48.282 06:20:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:48.282 06:20:54 -- common/autotest_common.sh@10 -- # set +x 00:25:48.282 [2024-07-13 06:20:54.660283] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:48.282 [2024-07-13 06:20:54.660343] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1225027 ] 00:25:48.282 EAL: No free 2048 kB hugepages reported on node 1 00:25:48.282 [2024-07-13 06:20:54.721285] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.540 [2024-07-13 06:20:54.835456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.106 06:20:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:49.106 06:20:55 -- common/autotest_common.sh@852 -- # return 0 00:25:49.106 06:20:55 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:25:49.106 06:20:55 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:25:49.106 06:20:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:49.672 06:20:55 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:49.672 06:20:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:49.930 nvme0n1 00:25:49.930 06:20:56 -- host/digest.sh@91 -- # bperf_py perform_tests 00:25:49.930 06:20:56 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:49.930 Running I/O for 2 seconds... 
00:25:52.459 00:25:52.459 Latency(us) 00:25:52.459 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:52.459 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:52.459 nvme0n1 : 2.01 16140.39 63.05 0.00 0.00 7917.12 2694.26 16311.18 00:25:52.459 =================================================================================================================== 00:25:52.459 Total : 16140.39 63.05 0.00 0.00 7917.12 2694.26 16311.18 00:25:52.459 0 00:25:52.459 06:20:58 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:25:52.459 06:20:58 -- host/digest.sh@92 -- # get_accel_stats 00:25:52.459 06:20:58 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:52.459 06:20:58 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:52.459 06:20:58 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:52.459 | select(.opcode=="crc32c") 00:25:52.459 | "\(.module_name) \(.executed)"' 00:25:52.459 06:20:58 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:25:52.459 06:20:58 -- host/digest.sh@93 -- # exp_module=software 00:25:52.459 06:20:58 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:25:52.459 06:20:58 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:52.459 06:20:58 -- host/digest.sh@97 -- # killprocess 1225027 00:25:52.459 06:20:58 -- common/autotest_common.sh@926 -- # '[' -z 1225027 ']' 00:25:52.459 06:20:58 -- common/autotest_common.sh@930 -- # kill -0 1225027 00:25:52.459 06:20:58 -- common/autotest_common.sh@931 -- # uname 00:25:52.459 06:20:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:52.459 06:20:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1225027 00:25:52.460 06:20:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:52.460 06:20:58 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:52.460 06:20:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1225027' 00:25:52.460 killing process with pid 1225027 00:25:52.460 06:20:58 -- common/autotest_common.sh@945 -- # kill 1225027 00:25:52.460 Received shutdown signal, test time was about 2.000000 seconds 00:25:52.460 00:25:52.460 Latency(us) 00:25:52.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:52.460 =================================================================================================================== 00:25:52.460 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:52.460 06:20:58 -- common/autotest_common.sh@950 -- # wait 1225027 00:25:52.460 06:20:58 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:25:52.460 06:20:58 -- host/digest.sh@77 -- # local rw bs qd 00:25:52.460 06:20:58 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:52.460 06:20:58 -- host/digest.sh@80 -- # rw=randread 00:25:52.460 06:20:58 -- host/digest.sh@80 -- # bs=131072 00:25:52.460 06:20:58 -- host/digest.sh@80 -- # qd=16 00:25:52.460 06:20:58 -- host/digest.sh@82 -- # bperfpid=1225573 00:25:52.460 06:20:58 -- host/digest.sh@83 -- # waitforlisten 1225573 /var/tmp/bperf.sock 00:25:52.460 06:20:58 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:52.460 06:20:58 -- common/autotest_common.sh@819 -- # '[' -z 1225573 ']' 00:25:52.460 06:20:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
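Each run_bperf in this log repeats the same flow: start bdevperf paused, finish framework init over its private RPC socket, attach the TCP target with data digest enabled, run I/O for two seconds, then inspect the accel stats. A condensed sketch using only commands and arguments that appear in the xtrace output (first run shown: randread, 4096-byte I/O, queue depth 128; bdevperf is backgrounded here for illustration, while the script records its pid as bperfpid and waits for the socket to come up):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bperf.sock

    # 1. Start bdevperf on core 1 (-m 2), paused until RPC configuration arrives.
    $SPDK/build/examples/bdevperf -m 2 -r $SOCK \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    # 2. Complete framework init, then attach the target with data digest (--ddgst).
    $SPDK/scripts/rpc.py -s $SOCK framework_start_init
    $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # 3. Drive the two-second workload against nvme0n1 and print the latency table.
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests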
00:25:52.460 06:20:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:52.460 06:20:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:52.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:52.460 06:20:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:52.460 06:20:58 -- common/autotest_common.sh@10 -- # set +x 00:25:52.460 [2024-07-13 06:20:58.933082] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:52.460 [2024-07-13 06:20:58.933172] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1225573 ] 00:25:52.460 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:52.460 Zero copy mechanism will not be used. 00:25:52.460 EAL: No free 2048 kB hugepages reported on node 1 00:25:52.718 [2024-07-13 06:20:58.993457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:52.718 [2024-07-13 06:20:59.102564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:52.718 06:20:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:52.718 06:20:59 -- common/autotest_common.sh@852 -- # return 0 00:25:52.718 06:20:59 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:25:52.718 06:20:59 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:25:52.718 06:20:59 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:52.977 06:20:59 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:52.977 06:20:59 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:53.544 nvme0n1 00:25:53.544 06:20:59 -- host/digest.sh@91 -- # bperf_py perform_tests 00:25:53.544 06:20:59 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:53.544 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:53.544 Zero copy mechanism will not be used. 00:25:53.544 Running I/O for 2 seconds... 
00:25:56.069 00:25:56.069 Latency(us) 00:25:56.069 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:56.069 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:56.069 nvme0n1 : 2.00 3482.60 435.33 0.00 0.00 4590.06 3810.80 8543.95 00:25:56.069 =================================================================================================================== 00:25:56.069 Total : 3482.60 435.33 0.00 0.00 4590.06 3810.80 8543.95 00:25:56.069 0 00:25:56.069 06:21:02 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:25:56.069 06:21:02 -- host/digest.sh@92 -- # get_accel_stats 00:25:56.069 06:21:02 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:56.069 06:21:02 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:56.069 06:21:02 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:56.069 | select(.opcode=="crc32c") 00:25:56.069 | "\(.module_name) \(.executed)"' 00:25:56.069 06:21:02 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:25:56.069 06:21:02 -- host/digest.sh@93 -- # exp_module=software 00:25:56.069 06:21:02 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:25:56.069 06:21:02 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:56.069 06:21:02 -- host/digest.sh@97 -- # killprocess 1225573 00:25:56.069 06:21:02 -- common/autotest_common.sh@926 -- # '[' -z 1225573 ']' 00:25:56.069 06:21:02 -- common/autotest_common.sh@930 -- # kill -0 1225573 00:25:56.069 06:21:02 -- common/autotest_common.sh@931 -- # uname 00:25:56.069 06:21:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:56.069 06:21:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1225573 00:25:56.069 06:21:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:56.069 06:21:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:56.069 06:21:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1225573' 00:25:56.069 killing process with pid 1225573 00:25:56.069 06:21:02 -- common/autotest_common.sh@945 -- # kill 1225573 00:25:56.069 Received shutdown signal, test time was about 2.000000 seconds 00:25:56.069 00:25:56.069 Latency(us) 00:25:56.069 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:56.069 =================================================================================================================== 00:25:56.069 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:56.069 06:21:02 -- common/autotest_common.sh@950 -- # wait 1225573 00:25:56.326 06:21:02 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:25:56.326 06:21:02 -- host/digest.sh@77 -- # local rw bs qd 00:25:56.326 06:21:02 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:56.326 06:21:02 -- host/digest.sh@80 -- # rw=randwrite 00:25:56.326 06:21:02 -- host/digest.sh@80 -- # bs=4096 00:25:56.326 06:21:02 -- host/digest.sh@80 -- # qd=128 00:25:56.326 06:21:02 -- host/digest.sh@82 -- # bperfpid=1226111 00:25:56.326 06:21:02 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:56.326 06:21:02 -- host/digest.sh@83 -- # waitforlisten 1226111 /var/tmp/bperf.sock 00:25:56.326 06:21:02 -- common/autotest_common.sh@819 -- # '[' -z 1226111 ']' 00:25:56.326 06:21:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
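The pass/fail decision for each run comes from the accel layer rather than from bdevperf itself: the script reads accel_get_stats over the same socket and expects the crc32c opcode to have been executed by the software module, since no accel hardware is configured on this node. The jq filter is verbatim from the trace; the comparison below is only a sketch of what the script's check amounts to:

    read -r acc_module acc_executed < <(
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )

    # Digests must actually have been computed, and by the expected module.
    (( acc_executed > 0 ))       || exit 1
    [ "$acc_module" = software ] || exit 1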
00:25:56.326 06:21:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:56.326 06:21:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:56.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:56.326 06:21:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:56.326 06:21:02 -- common/autotest_common.sh@10 -- # set +x 00:25:56.326 [2024-07-13 06:21:02.654219] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:56.326 [2024-07-13 06:21:02.654312] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1226111 ] 00:25:56.326 EAL: No free 2048 kB hugepages reported on node 1 00:25:56.326 [2024-07-13 06:21:02.718028] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.326 [2024-07-13 06:21:02.828726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:56.583 06:21:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:56.583 06:21:02 -- common/autotest_common.sh@852 -- # return 0 00:25:56.583 06:21:02 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:25:56.583 06:21:02 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:25:56.583 06:21:02 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:56.840 06:21:03 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:56.840 06:21:03 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:57.097 nvme0n1 00:25:57.097 06:21:03 -- host/digest.sh@91 -- # bperf_py perform_tests 00:25:57.097 06:21:03 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:57.354 Running I/O for 2 seconds... 
00:25:59.250 00:25:59.250 Latency(us) 00:25:59.250 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.250 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:59.250 nvme0n1 : 2.01 19403.41 75.79 0.00 0.00 6582.77 5971.06 12718.84 00:25:59.250 =================================================================================================================== 00:25:59.250 Total : 19403.41 75.79 0.00 0.00 6582.77 5971.06 12718.84 00:25:59.250 0 00:25:59.250 06:21:05 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:25:59.250 06:21:05 -- host/digest.sh@92 -- # get_accel_stats 00:25:59.250 06:21:05 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:59.250 06:21:05 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:59.250 06:21:05 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:59.250 | select(.opcode=="crc32c") 00:25:59.250 | "\(.module_name) \(.executed)"' 00:25:59.507 06:21:05 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:25:59.507 06:21:05 -- host/digest.sh@93 -- # exp_module=software 00:25:59.507 06:21:05 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:25:59.507 06:21:05 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:59.507 06:21:05 -- host/digest.sh@97 -- # killprocess 1226111 00:25:59.507 06:21:05 -- common/autotest_common.sh@926 -- # '[' -z 1226111 ']' 00:25:59.507 06:21:05 -- common/autotest_common.sh@930 -- # kill -0 1226111 00:25:59.507 06:21:05 -- common/autotest_common.sh@931 -- # uname 00:25:59.507 06:21:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:59.507 06:21:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1226111 00:25:59.507 06:21:05 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:59.507 06:21:05 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:59.507 06:21:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1226111' 00:25:59.507 killing process with pid 1226111 00:25:59.507 06:21:05 -- common/autotest_common.sh@945 -- # kill 1226111 00:25:59.507 Received shutdown signal, test time was about 2.000000 seconds 00:25:59.507 00:25:59.507 Latency(us) 00:25:59.507 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.507 =================================================================================================================== 00:25:59.507 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:59.507 06:21:05 -- common/autotest_common.sh@950 -- # wait 1226111 00:25:59.765 06:21:06 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:25:59.765 06:21:06 -- host/digest.sh@77 -- # local rw bs qd 00:25:59.765 06:21:06 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:59.765 06:21:06 -- host/digest.sh@80 -- # rw=randwrite 00:25:59.765 06:21:06 -- host/digest.sh@80 -- # bs=131072 00:25:59.765 06:21:06 -- host/digest.sh@80 -- # qd=16 00:25:59.765 06:21:06 -- host/digest.sh@82 -- # bperfpid=1226526 00:25:59.765 06:21:06 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:59.765 06:21:06 -- host/digest.sh@83 -- # waitforlisten 1226526 /var/tmp/bperf.sock 00:25:59.765 06:21:06 -- common/autotest_common.sh@819 -- # '[' -z 1226526 ']' 00:25:59.765 06:21:06 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:25:59.765 06:21:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:59.765 06:21:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:59.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:59.765 06:21:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:59.765 06:21:06 -- common/autotest_common.sh@10 -- # set +x 00:25:59.765 [2024-07-13 06:21:06.204504] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:59.766 [2024-07-13 06:21:06.204599] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1226526 ] 00:25:59.766 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:59.766 Zero copy mechanism will not be used. 00:25:59.766 EAL: No free 2048 kB hugepages reported on node 1 00:25:59.766 [2024-07-13 06:21:06.271805] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.024 [2024-07-13 06:21:06.385674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:00.024 06:21:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:00.024 06:21:06 -- common/autotest_common.sh@852 -- # return 0 00:26:00.024 06:21:06 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:26:00.024 06:21:06 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:26:00.024 06:21:06 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:00.282 06:21:06 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:00.282 06:21:06 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:00.856 nvme0n1 00:26:00.856 06:21:07 -- host/digest.sh@91 -- # bperf_py perform_tests 00:26:00.856 06:21:07 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:00.856 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:00.856 Zero copy mechanism will not be used. 00:26:00.856 Running I/O for 2 seconds... 
00:26:02.765 00:26:02.765 Latency(us) 00:26:02.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:02.765 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:02.765 nvme0n1 : 2.01 2960.65 370.08 0.00 0.00 5391.79 3519.53 11408.12 00:26:02.765 =================================================================================================================== 00:26:02.765 Total : 2960.65 370.08 0.00 0.00 5391.79 3519.53 11408.12 00:26:02.765 0 00:26:02.765 06:21:09 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:26:02.765 06:21:09 -- host/digest.sh@92 -- # get_accel_stats 00:26:02.765 06:21:09 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:02.765 06:21:09 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:02.765 | select(.opcode=="crc32c") 00:26:02.765 | "\(.module_name) \(.executed)"' 00:26:02.765 06:21:09 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:03.056 06:21:09 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:26:03.056 06:21:09 -- host/digest.sh@93 -- # exp_module=software 00:26:03.056 06:21:09 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:26:03.056 06:21:09 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:03.056 06:21:09 -- host/digest.sh@97 -- # killprocess 1226526 00:26:03.056 06:21:09 -- common/autotest_common.sh@926 -- # '[' -z 1226526 ']' 00:26:03.056 06:21:09 -- common/autotest_common.sh@930 -- # kill -0 1226526 00:26:03.056 06:21:09 -- common/autotest_common.sh@931 -- # uname 00:26:03.056 06:21:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:03.056 06:21:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1226526 00:26:03.056 06:21:09 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:03.056 06:21:09 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:03.056 06:21:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1226526' 00:26:03.056 killing process with pid 1226526 00:26:03.056 06:21:09 -- common/autotest_common.sh@945 -- # kill 1226526 00:26:03.056 Received shutdown signal, test time was about 2.000000 seconds 00:26:03.056 00:26:03.056 Latency(us) 00:26:03.056 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:03.056 =================================================================================================================== 00:26:03.056 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:03.056 06:21:09 -- common/autotest_common.sh@950 -- # wait 1226526 00:26:03.316 06:21:09 -- host/digest.sh@126 -- # killprocess 1224904 00:26:03.316 06:21:09 -- common/autotest_common.sh@926 -- # '[' -z 1224904 ']' 00:26:03.316 06:21:09 -- common/autotest_common.sh@930 -- # kill -0 1224904 00:26:03.316 06:21:09 -- common/autotest_common.sh@931 -- # uname 00:26:03.316 06:21:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:03.316 06:21:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1224904 00:26:03.316 06:21:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:03.316 06:21:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:03.316 06:21:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1224904' 00:26:03.316 killing process with pid 1224904 00:26:03.316 06:21:09 -- common/autotest_common.sh@945 -- # kill 1224904 00:26:03.316 06:21:09 -- common/autotest_common.sh@950 -- # wait 1224904 
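That closes out nvmf_digest_clean. Counting back through the log, the flow above ran four times with the (workload, block size, queue depth) combinations taken straight from the bdevperf command lines: randread 4096/128, randread 131072/16, randwrite 4096/128 and randwrite 131072/16. digest.sh issues them as individual run_bperf calls; the loop below is only an illustrative way to write the same four invocations:

    for spec in "randread 4096 128" "randread 131072 16" \
                "randwrite 4096 128" "randwrite 131072 16"; do
        run_bperf $spec    # helper defined in host/digest.sh; word-splitting is intentional
    done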
00:26:03.574 00:26:03.574 real 0m15.832s 00:26:03.574 user 0m31.427s 00:26:03.574 sys 0m4.186s 00:26:03.574 06:21:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:03.574 06:21:10 -- common/autotest_common.sh@10 -- # set +x 00:26:03.574 ************************************ 00:26:03.574 END TEST nvmf_digest_clean 00:26:03.574 ************************************ 00:26:03.574 06:21:10 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:26:03.574 06:21:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:03.574 06:21:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:03.574 06:21:10 -- common/autotest_common.sh@10 -- # set +x 00:26:03.574 ************************************ 00:26:03.574 START TEST nvmf_digest_error 00:26:03.574 ************************************ 00:26:03.574 06:21:10 -- common/autotest_common.sh@1104 -- # run_digest_error 00:26:03.574 06:21:10 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:26:03.574 06:21:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:03.574 06:21:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:03.574 06:21:10 -- common/autotest_common.sh@10 -- # set +x 00:26:03.574 06:21:10 -- nvmf/common.sh@469 -- # nvmfpid=1227498 00:26:03.574 06:21:10 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:03.574 06:21:10 -- nvmf/common.sh@470 -- # waitforlisten 1227498 00:26:03.574 06:21:10 -- common/autotest_common.sh@819 -- # '[' -z 1227498 ']' 00:26:03.574 06:21:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:03.574 06:21:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:03.574 06:21:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:03.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:03.574 06:21:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:03.574 06:21:10 -- common/autotest_common.sh@10 -- # set +x 00:26:03.833 [2024-07-13 06:21:10.117723] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:03.833 [2024-07-13 06:21:10.117811] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:03.833 EAL: No free 2048 kB hugepages reported on node 1 00:26:03.833 [2024-07-13 06:21:10.187270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.833 [2024-07-13 06:21:10.301300] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:03.833 [2024-07-13 06:21:10.301478] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:03.833 [2024-07-13 06:21:10.301500] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:03.833 [2024-07-13 06:21:10.301527] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:03.833 [2024-07-13 06:21:10.301558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.766 06:21:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:04.766 06:21:11 -- common/autotest_common.sh@852 -- # return 0 00:26:04.766 06:21:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:04.766 06:21:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:04.766 06:21:11 -- common/autotest_common.sh@10 -- # set +x 00:26:04.766 06:21:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:04.766 06:21:11 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:04.766 06:21:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:04.766 06:21:11 -- common/autotest_common.sh@10 -- # set +x 00:26:04.766 [2024-07-13 06:21:11.067949] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:04.766 06:21:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:04.766 06:21:11 -- host/digest.sh@104 -- # common_target_config 00:26:04.766 06:21:11 -- host/digest.sh@43 -- # rpc_cmd 00:26:04.766 06:21:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:04.766 06:21:11 -- common/autotest_common.sh@10 -- # set +x 00:26:04.766 null0 00:26:04.766 [2024-07-13 06:21:11.182195] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:04.766 [2024-07-13 06:21:11.206433] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:04.766 06:21:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:04.766 06:21:11 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:26:04.766 06:21:11 -- host/digest.sh@54 -- # local rw bs qd 00:26:04.766 06:21:11 -- host/digest.sh@56 -- # rw=randread 00:26:04.766 06:21:11 -- host/digest.sh@56 -- # bs=4096 00:26:04.766 06:21:11 -- host/digest.sh@56 -- # qd=128 00:26:04.766 06:21:11 -- host/digest.sh@58 -- # bperfpid=1227656 00:26:04.766 06:21:11 -- host/digest.sh@60 -- # waitforlisten 1227656 /var/tmp/bperf.sock 00:26:04.766 06:21:11 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:04.766 06:21:11 -- common/autotest_common.sh@819 -- # '[' -z 1227656 ']' 00:26:04.766 06:21:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:04.766 06:21:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:04.766 06:21:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:04.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:04.766 06:21:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:04.766 06:21:11 -- common/autotest_common.sh@10 -- # set +x 00:26:04.766 [2024-07-13 06:21:11.250142] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:26:04.766 [2024-07-13 06:21:11.250235] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1227656 ] 00:26:05.024 EAL: No free 2048 kB hugepages reported on node 1 00:26:05.024 [2024-07-13 06:21:11.315086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.024 [2024-07-13 06:21:11.430014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:05.957 06:21:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:05.957 06:21:12 -- common/autotest_common.sh@852 -- # return 0 00:26:05.957 06:21:12 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:05.957 06:21:12 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:05.957 06:21:12 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:05.957 06:21:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:05.957 06:21:12 -- common/autotest_common.sh@10 -- # set +x 00:26:05.957 06:21:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:05.957 06:21:12 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:05.957 06:21:12 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:06.531 nvme0n1 00:26:06.531 06:21:12 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:06.531 06:21:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:06.531 06:21:12 -- common/autotest_common.sh@10 -- # set +x 00:26:06.531 06:21:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:06.531 06:21:12 -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:06.531 06:21:12 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:06.531 Running I/O for 2 seconds... 
00:26:06.531 [2024-07-13 06:21:12.989952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:06.531 [2024-07-13 06:21:12.990000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:14517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.531 [2024-07-13 06:21:12.990020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.531 [2024-07-13 06:21:13.005418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:06.531 [2024-07-13 06:21:13.005463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.531 [2024-07-13 06:21:13.005482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.531 [2024-07-13 06:21:13.020934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:06.531 [2024-07-13 06:21:13.020965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.531 [2024-07-13 06:21:13.020982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.531 [2024-07-13 06:21:13.036345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:06.531 [2024-07-13 06:21:13.036374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.531 [2024-07-13 06:21:13.036390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.791 [2024-07-13 06:21:13.051944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:06.791 [2024-07-13 06:21:13.051976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.791 [2024-07-13 06:21:13.051994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.791 [2024-07-13 06:21:13.067114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:06.791 [2024-07-13 06:21:13.067162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.791 [2024-07-13 06:21:13.067179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.791 [2024-07-13 06:21:13.082563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:06.791 [2024-07-13 06:21:13.082593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.791 [2024-07-13 06:21:13.082610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.792 [2024-07-13 06:21:13.097966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:06.792 [2024-07-13 06:21:13.097996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.792 [2024-07-13 06:21:13.098013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.792 [2024-07-13 06:21:13.112936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:06.792 [2024-07-13 06:21:13.112974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.792 [2024-07-13 06:21:13.112992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.792 [2024-07-13 06:21:13.128174] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:06.792 [2024-07-13 06:21:13.128217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.792 [2024-07-13 06:21:13.128234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.792 [2024-07-13 06:21:13.143069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:06.792 [2024-07-13 06:21:13.143100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:8990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.792 [2024-07-13 06:21:13.143118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.792 [2024-07-13 06:21:13.158326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:06.792 [2024-07-13 06:21:13.158355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.792 [2024-07-13 06:21:13.158371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.792 [2024-07-13 06:21:13.173397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:06.792 [2024-07-13 06:21:13.173428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.792 [2024-07-13 06:21:13.173445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.792 [2024-07-13 06:21:13.188716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:06.792 [2024-07-13 06:21:13.188745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.792 [2024-07-13 06:21:13.188761] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.792 [2024-07-13 06:21:13.203904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:06.792 [2024-07-13 06:21:13.203936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.792 [2024-07-13 06:21:13.203969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.792 [2024-07-13 06:21:13.218485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:06.792 [2024-07-13 06:21:13.218529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.792 [2024-07-13 06:21:13.218545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.792 [2024-07-13 06:21:13.233429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:06.792 [2024-07-13 06:21:13.233458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.792 [2024-07-13 06:21:13.233473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.792 [2024-07-13 06:21:13.248810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:06.792 [2024-07-13 06:21:13.248838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.792 [2024-07-13 06:21:13.248878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.792 [2024-07-13 06:21:13.263993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:06.792 [2024-07-13 06:21:13.264023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.792 [2024-07-13 06:21:13.264041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.792 [2024-07-13 06:21:13.278942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:06.792 [2024-07-13 06:21:13.278973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.792 [2024-07-13 06:21:13.278990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.792 [2024-07-13 06:21:13.293715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:06.792 [2024-07-13 06:21:13.293743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:06.792 [2024-07-13 06:21:13.293760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.049 [2024-07-13 06:21:13.309063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.049 [2024-07-13 06:21:13.309093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.049 [2024-07-13 06:21:13.309110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.049 [2024-07-13 06:21:13.324433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.049 [2024-07-13 06:21:13.324462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.049 [2024-07-13 06:21:13.324478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.049 [2024-07-13 06:21:13.339638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.049 [2024-07-13 06:21:13.339667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.049 [2024-07-13 06:21:13.339683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.049 [2024-07-13 06:21:13.354598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.049 [2024-07-13 06:21:13.354627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.049 [2024-07-13 06:21:13.354643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.049 [2024-07-13 06:21:13.369895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.049 [2024-07-13 06:21:13.369924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.049 [2024-07-13 06:21:13.369949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.049 [2024-07-13 06:21:13.384653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.049 [2024-07-13 06:21:13.384681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.049 [2024-07-13 06:21:13.384698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.049 [2024-07-13 06:21:13.399554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.049 [2024-07-13 06:21:13.399583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 
lba:16879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.049 [2024-07-13 06:21:13.399599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.049 [2024-07-13 06:21:13.414602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.049 [2024-07-13 06:21:13.414630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:14286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.049 [2024-07-13 06:21:13.414647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.049 [2024-07-13 06:21:13.429658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.049 [2024-07-13 06:21:13.429687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.049 [2024-07-13 06:21:13.429703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.049 [2024-07-13 06:21:13.444900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.049 [2024-07-13 06:21:13.444932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.049 [2024-07-13 06:21:13.444958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.049 [2024-07-13 06:21:13.460452] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.049 [2024-07-13 06:21:13.460481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.049 [2024-07-13 06:21:13.460497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.049 [2024-07-13 06:21:13.475806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.049 [2024-07-13 06:21:13.475835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.049 [2024-07-13 06:21:13.475872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.049 [2024-07-13 06:21:13.491288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.049 [2024-07-13 06:21:13.491317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.049 [2024-07-13 06:21:13.491333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.050 [2024-07-13 06:21:13.506633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.050 [2024-07-13 06:21:13.506667] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.050 [2024-07-13 06:21:13.506685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.050 [2024-07-13 06:21:13.522097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.050 [2024-07-13 06:21:13.522127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.050 [2024-07-13 06:21:13.522160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.050 [2024-07-13 06:21:13.538375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.050 [2024-07-13 06:21:13.538410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.050 [2024-07-13 06:21:13.538429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.050 [2024-07-13 06:21:13.554212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.050 [2024-07-13 06:21:13.554256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.050 [2024-07-13 06:21:13.554276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.306 [2024-07-13 06:21:13.571355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.306 [2024-07-13 06:21:13.571389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-07-13 06:21:13.571408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.306 [2024-07-13 06:21:13.587696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.306 [2024-07-13 06:21:13.587732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:10079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-07-13 06:21:13.587752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.306 [2024-07-13 06:21:13.604994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.306 [2024-07-13 06:21:13.605035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-07-13 06:21:13.605053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.306 [2024-07-13 06:21:13.621268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 
00:26:07.306 [2024-07-13 06:21:13.621302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-07-13 06:21:13.621321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.306 [2024-07-13 06:21:13.637151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.306 [2024-07-13 06:21:13.637194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-07-13 06:21:13.637226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.306 [2024-07-13 06:21:13.654355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.306 [2024-07-13 06:21:13.654391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-07-13 06:21:13.654410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.306 [2024-07-13 06:21:13.669935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.306 [2024-07-13 06:21:13.669964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-07-13 06:21:13.669980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.306 [2024-07-13 06:21:13.686857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.306 [2024-07-13 06:21:13.686900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-07-13 06:21:13.686933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.306 [2024-07-13 06:21:13.702444] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.306 [2024-07-13 06:21:13.702480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-07-13 06:21:13.702500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.306 [2024-07-13 06:21:13.719190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.306 [2024-07-13 06:21:13.719225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-07-13 06:21:13.719244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.306 [2024-07-13 06:21:13.736132] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.306 [2024-07-13 06:21:13.736179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-07-13 06:21:13.736198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.306 [2024-07-13 06:21:13.753032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.306 [2024-07-13 06:21:13.753063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-07-13 06:21:13.753095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.306 [2024-07-13 06:21:13.769801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.306 [2024-07-13 06:21:13.769835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-07-13 06:21:13.769855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.306 [2024-07-13 06:21:13.786206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.306 [2024-07-13 06:21:13.786258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-07-13 06:21:13.786279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.306 [2024-07-13 06:21:13.802230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.306 [2024-07-13 06:21:13.802264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.306 [2024-07-13 06:21:13.802283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.564 [2024-07-13 06:21:13.818561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.564 [2024-07-13 06:21:13.818596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.564 [2024-07-13 06:21:13.818615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.564 [2024-07-13 06:21:13.834542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.564 [2024-07-13 06:21:13.834578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.564 [2024-07-13 06:21:13.834597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:26:07.564 [2024-07-13 06:21:13.851643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.564 [2024-07-13 06:21:13.851679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.564 [2024-07-13 06:21:13.851698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.564 [2024-07-13 06:21:13.868334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.564 [2024-07-13 06:21:13.868369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:11805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.564 [2024-07-13 06:21:13.868388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.564 [2024-07-13 06:21:13.884964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.564 [2024-07-13 06:21:13.884994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.564 [2024-07-13 06:21:13.885025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.564 [2024-07-13 06:21:13.901769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.564 [2024-07-13 06:21:13.901804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.564 [2024-07-13 06:21:13.901823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.564 [2024-07-13 06:21:13.917681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.564 [2024-07-13 06:21:13.917715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.564 [2024-07-13 06:21:13.917735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.564 [2024-07-13 06:21:13.934193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.564 [2024-07-13 06:21:13.934228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.564 [2024-07-13 06:21:13.934247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.564 [2024-07-13 06:21:13.950692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.564 [2024-07-13 06:21:13.950726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.564 [2024-07-13 06:21:13.950745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.564 [2024-07-13 06:21:13.967130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.564 [2024-07-13 06:21:13.967174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.564 [2024-07-13 06:21:13.967194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.564 [2024-07-13 06:21:13.990047] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.564 [2024-07-13 06:21:13.990077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.564 [2024-07-13 06:21:13.990093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.564 [2024-07-13 06:21:14.006506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.564 [2024-07-13 06:21:14.006539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.564 [2024-07-13 06:21:14.006559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.564 [2024-07-13 06:21:14.022952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.564 [2024-07-13 06:21:14.022982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.564 [2024-07-13 06:21:14.022998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.564 [2024-07-13 06:21:14.039434] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.564 [2024-07-13 06:21:14.039468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.564 [2024-07-13 06:21:14.039488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.564 [2024-07-13 06:21:14.055845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.564 [2024-07-13 06:21:14.055886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.564 [2024-07-13 06:21:14.055920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.564 [2024-07-13 06:21:14.072567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.564 [2024-07-13 06:21:14.072601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.564 [2024-07-13 06:21:14.072628] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.822 [2024-07-13 06:21:14.089088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.822 [2024-07-13 06:21:14.089118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.822 [2024-07-13 06:21:14.089134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.822 [2024-07-13 06:21:14.104027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.822 [2024-07-13 06:21:14.104059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.822 [2024-07-13 06:21:14.104077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.822 [2024-07-13 06:21:14.115687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.822 [2024-07-13 06:21:14.115721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.822 [2024-07-13 06:21:14.115740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.822 [2024-07-13 06:21:14.131681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.822 [2024-07-13 06:21:14.131716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.822 [2024-07-13 06:21:14.131736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.822 [2024-07-13 06:21:14.147884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.823 [2024-07-13 06:21:14.147932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.823 [2024-07-13 06:21:14.147948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.823 [2024-07-13 06:21:14.170135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.823 [2024-07-13 06:21:14.170166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.823 [2024-07-13 06:21:14.170201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.823 [2024-07-13 06:21:14.186387] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.823 [2024-07-13 06:21:14.186422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:07.823 [2024-07-13 06:21:14.186442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.823 [2024-07-13 06:21:14.203006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.823 [2024-07-13 06:21:14.203036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.823 [2024-07-13 06:21:14.203052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.823 [2024-07-13 06:21:14.219005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.823 [2024-07-13 06:21:14.219038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.823 [2024-07-13 06:21:14.219055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.823 [2024-07-13 06:21:14.235739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.823 [2024-07-13 06:21:14.235774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.823 [2024-07-13 06:21:14.235794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.823 [2024-07-13 06:21:14.252508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.823 [2024-07-13 06:21:14.252542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.823 [2024-07-13 06:21:14.252561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.823 [2024-07-13 06:21:14.268558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.823 [2024-07-13 06:21:14.268593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.823 [2024-07-13 06:21:14.268612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.823 [2024-07-13 06:21:14.284987] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.823 [2024-07-13 06:21:14.285016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.823 [2024-07-13 06:21:14.285032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.823 [2024-07-13 06:21:14.301474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.823 [2024-07-13 06:21:14.301508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 
nsid:1 lba:9530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.823 [2024-07-13 06:21:14.301527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:07.823 [2024-07-13 06:21:14.318268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:07.823 [2024-07-13 06:21:14.318303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:07.823 [2024-07-13 06:21:14.318322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.081 [2024-07-13 06:21:14.334657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.081 [2024-07-13 06:21:14.334692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.081 [2024-07-13 06:21:14.334712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.082 [2024-07-13 06:21:14.351751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.082 [2024-07-13 06:21:14.351786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.082 [2024-07-13 06:21:14.351805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.082 [2024-07-13 06:21:14.368479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.082 [2024-07-13 06:21:14.368514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.082 [2024-07-13 06:21:14.368533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.082 [2024-07-13 06:21:14.385076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.082 [2024-07-13 06:21:14.385104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:19408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.082 [2024-07-13 06:21:14.385120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.082 [2024-07-13 06:21:14.401507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.082 [2024-07-13 06:21:14.401541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.082 [2024-07-13 06:21:14.401561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.082 [2024-07-13 06:21:14.417500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.082 [2024-07-13 06:21:14.417535] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.082 [2024-07-13 06:21:14.417554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.082 [2024-07-13 06:21:14.433993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.082 [2024-07-13 06:21:14.434023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.082 [2024-07-13 06:21:14.434040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.082 [2024-07-13 06:21:14.450026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.082 [2024-07-13 06:21:14.450055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.082 [2024-07-13 06:21:14.450071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.082 [2024-07-13 06:21:14.465887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.082 [2024-07-13 06:21:14.465932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.082 [2024-07-13 06:21:14.465949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.082 [2024-07-13 06:21:14.482017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.082 [2024-07-13 06:21:14.482046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.082 [2024-07-13 06:21:14.482063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.082 [2024-07-13 06:21:14.498804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.082 [2024-07-13 06:21:14.498843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.082 [2024-07-13 06:21:14.498862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.082 [2024-07-13 06:21:14.515222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.082 [2024-07-13 06:21:14.515256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.082 [2024-07-13 06:21:14.515275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.082 [2024-07-13 06:21:14.531175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 
00:26:08.082 [2024-07-13 06:21:14.531224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.082 [2024-07-13 06:21:14.531244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.082 [2024-07-13 06:21:14.547080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.082 [2024-07-13 06:21:14.547121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.082 [2024-07-13 06:21:14.547138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.082 [2024-07-13 06:21:14.563564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.082 [2024-07-13 06:21:14.563598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.082 [2024-07-13 06:21:14.563618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.082 [2024-07-13 06:21:14.579948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.082 [2024-07-13 06:21:14.579976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.082 [2024-07-13 06:21:14.579992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.340 [2024-07-13 06:21:14.596924] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.340 [2024-07-13 06:21:14.596955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.340 [2024-07-13 06:21:14.596972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.340 [2024-07-13 06:21:14.613406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.340 [2024-07-13 06:21:14.613440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.340 [2024-07-13 06:21:14.613460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.340 [2024-07-13 06:21:14.630093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.340 [2024-07-13 06:21:14.630123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.340 [2024-07-13 06:21:14.630139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.340 [2024-07-13 06:21:14.646362] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.340 [2024-07-13 06:21:14.646396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.340 [2024-07-13 06:21:14.646415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.340 [2024-07-13 06:21:14.662596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.340 [2024-07-13 06:21:14.662643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.341 [2024-07-13 06:21:14.662662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.341 [2024-07-13 06:21:14.679477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.341 [2024-07-13 06:21:14.679512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.341 [2024-07-13 06:21:14.679531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.341 [2024-07-13 06:21:14.696021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.341 [2024-07-13 06:21:14.696051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.341 [2024-07-13 06:21:14.696068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.341 [2024-07-13 06:21:14.712465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.341 [2024-07-13 06:21:14.712500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.341 [2024-07-13 06:21:14.712519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.341 [2024-07-13 06:21:14.728422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.341 [2024-07-13 06:21:14.728456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.341 [2024-07-13 06:21:14.728480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.341 [2024-07-13 06:21:14.745032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.341 [2024-07-13 06:21:14.745063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.341 [2024-07-13 06:21:14.745080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:26:08.341 [2024-07-13 06:21:14.761703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.341 [2024-07-13 06:21:14.761736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.341 [2024-07-13 06:21:14.761755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.341 [2024-07-13 06:21:14.778529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.341 [2024-07-13 06:21:14.778563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.341 [2024-07-13 06:21:14.778591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.341 [2024-07-13 06:21:14.795069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.341 [2024-07-13 06:21:14.795100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.341 [2024-07-13 06:21:14.795120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.341 [2024-07-13 06:21:14.811726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.341 [2024-07-13 06:21:14.811761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20465 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.341 [2024-07-13 06:21:14.811780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.341 [2024-07-13 06:21:14.828151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.341 [2024-07-13 06:21:14.828207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.341 [2024-07-13 06:21:14.828227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.341 [2024-07-13 06:21:14.844691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.341 [2024-07-13 06:21:14.844725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.341 [2024-07-13 06:21:14.844744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.599 [2024-07-13 06:21:14.862392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.599 [2024-07-13 06:21:14.862426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.599 [2024-07-13 06:21:14.862445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.599 [2024-07-13 06:21:14.878797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.599 [2024-07-13 06:21:14.878831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.599 [2024-07-13 06:21:14.878852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.599 [2024-07-13 06:21:14.895448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.599 [2024-07-13 06:21:14.895482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.599 [2024-07-13 06:21:14.895501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.599 [2024-07-13 06:21:14.912118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.599 [2024-07-13 06:21:14.912145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.599 [2024-07-13 06:21:14.912178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.599 [2024-07-13 06:21:14.928673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.599 [2024-07-13 06:21:14.928712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.599 [2024-07-13 06:21:14.928731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.599 [2024-07-13 06:21:14.945150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.599 [2024-07-13 06:21:14.945195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.599 [2024-07-13 06:21:14.945211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.599 [2024-07-13 06:21:14.961929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10b7f00) 00:26:08.599 [2024-07-13 06:21:14.961957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.599 [2024-07-13 06:21:14.961976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:08.599 00:26:08.599 Latency(us) 00:26:08.599 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:08.599 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:08.599 nvme0n1 : 2.04 15436.17 60.30 0.00 0.00 8122.31 3592.34 47768.46 00:26:08.599 
=================================================================================================================== 00:26:08.599 Total : 15436.17 60.30 0.00 0.00 8122.31 3592.34 47768.46 00:26:08.599 0 00:26:08.599 06:21:15 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:08.599 06:21:15 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:08.599 06:21:15 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:08.599 06:21:15 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:08.599 | .driver_specific 00:26:08.599 | .nvme_error 00:26:08.599 | .status_code 00:26:08.599 | .command_transient_transport_error' 00:26:08.857 06:21:15 -- host/digest.sh@71 -- # (( 123 > 0 )) 00:26:08.857 06:21:15 -- host/digest.sh@73 -- # killprocess 1227656 00:26:08.857 06:21:15 -- common/autotest_common.sh@926 -- # '[' -z 1227656 ']' 00:26:08.857 06:21:15 -- common/autotest_common.sh@930 -- # kill -0 1227656 00:26:08.857 06:21:15 -- common/autotest_common.sh@931 -- # uname 00:26:08.857 06:21:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:08.857 06:21:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1227656 00:26:08.857 06:21:15 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:08.857 06:21:15 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:08.857 06:21:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1227656' 00:26:08.857 killing process with pid 1227656 00:26:08.857 06:21:15 -- common/autotest_common.sh@945 -- # kill 1227656 00:26:08.857 Received shutdown signal, test time was about 2.000000 seconds 00:26:08.857 00:26:08.857 Latency(us) 00:26:08.857 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:08.857 =================================================================================================================== 00:26:08.857 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:08.857 06:21:15 -- common/autotest_common.sh@950 -- # wait 1227656 00:26:09.115 06:21:15 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:26:09.115 06:21:15 -- host/digest.sh@54 -- # local rw bs qd 00:26:09.115 06:21:15 -- host/digest.sh@56 -- # rw=randread 00:26:09.115 06:21:15 -- host/digest.sh@56 -- # bs=131072 00:26:09.115 06:21:15 -- host/digest.sh@56 -- # qd=16 00:26:09.115 06:21:15 -- host/digest.sh@58 -- # bperfpid=1228189 00:26:09.115 06:21:15 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:09.115 06:21:15 -- host/digest.sh@60 -- # waitforlisten 1228189 /var/tmp/bperf.sock 00:26:09.115 06:21:15 -- common/autotest_common.sh@819 -- # '[' -z 1228189 ']' 00:26:09.115 06:21:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:09.115 06:21:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:09.115 06:21:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:09.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:09.115 06:21:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:09.115 06:21:15 -- common/autotest_common.sh@10 -- # set +x 00:26:09.115 [2024-07-13 06:21:15.599326] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:26:09.115 [2024-07-13 06:21:15.599397] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1228189 ]
00:26:09.115 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:09.115 Zero copy mechanism will not be used.
00:26:09.373 EAL: No free 2048 kB hugepages reported on node 1
00:26:09.373 [2024-07-13 06:21:15.661518] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:09.373 [2024-07-13 06:21:15.779779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:26:10.306 06:21:16 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:26:10.306 06:21:16 -- common/autotest_common.sh@852 -- # return 0
00:26:10.306 06:21:16 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:10.306 06:21:16 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:10.306 06:21:16 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:10.306 06:21:16 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:10.306 06:21:16 -- common/autotest_common.sh@10 -- # set +x
00:26:10.306 06:21:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:10.306 06:21:16 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:10.306 06:21:16 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
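
The trace above is the setup half of the second error run: bdevperf (pid 1228189) is started idle with -z against /var/tmp/bperf.sock, per-controller NVMe error counters are enabled, crc32c error injection is switched off while the controller is attached with data digest checking (--ddgst), and the lines that follow re-enable injection in corrupt mode before perform_tests drives the workload and bdev_get_iostat is read back. A condensed, stand-alone sketch of that flow, built only from the commands visible in this trace (not the digest.sh source itself), is given below; the socket behind the rpc_cmd accel_error_inject_error calls is hidden by xtrace_disable in the log and is assumed here to be the target application's default RPC socket rather than bperf.sock.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
BPERF_SOCK=/var/tmp/bperf.sock

# Start bdevperf idle (-z); it runs the randread job only once perform_tests is sent.
# digest.sh waits for the RPC socket with waitforlisten rather than a fixed sleep.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 2 -r "$BPERF_SOCK" -w randread -o 131072 -t 2 -q 16 -z &

# Host side: keep per-controller NVMe error counters and retry failed I/O indefinitely.
"$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Keep crc32c injection disabled while connecting (socket assumed, see note above).
"$RPC" accel_error_inject_error -o crc32c -t disable

# Attach with data digest enabled so received payloads are CRC-checked on the host.
"$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Re-enable injection in corrupt mode so the digest checks start failing.
"$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32

# Run the configured workload, then read back the transient transport error count.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s "$BPERF_SOCK" perform_tests
"$RPC" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

Each corrupted digest shows up as one of the COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions logged below, and the count returned by the jq query is what get_transient_errcount checks for being non-zero (the first run above reported 123).
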
00:26:10.872 [2024-07-13 06:21:17.329512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:10.872 [2024-07-13 06:21:17.329566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.872 [2024-07-13 06:21:17.329588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.872 [2024-07-13 06:21:17.339056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:10.872 [2024-07-13 06:21:17.339088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.872 [2024-07-13 06:21:17.339107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:10.872 [2024-07-13 06:21:17.348468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:10.873 [2024-07-13 06:21:17.348505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.873 [2024-07-13 06:21:17.348524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:10.873 [2024-07-13 06:21:17.357725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:10.873 [2024-07-13 06:21:17.357760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.873 [2024-07-13 06:21:17.357779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:10.873 [2024-07-13 06:21:17.367314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:10.873 [2024-07-13 06:21:17.367349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.873 [2024-07-13 06:21:17.367369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:10.873 [2024-07-13 06:21:17.377206] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:10.873 [2024-07-13 06:21:17.377250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:10.873 [2024-07-13 06:21:17.377267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.131 [2024-07-13 06:21:17.387262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.131 [2024-07-13 06:21:17.387296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.131 [2024-07-13 06:21:17.387315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.131 [2024-07-13 06:21:17.396254] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.131 [2024-07-13 06:21:17.396287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.131 [2024-07-13 06:21:17.396306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.131 [2024-07-13 06:21:17.405270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.131 [2024-07-13 06:21:17.405303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.131 [2024-07-13 06:21:17.405322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.131 [2024-07-13 06:21:17.414362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.131 [2024-07-13 06:21:17.414395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.131 [2024-07-13 06:21:17.414414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.131 [2024-07-13 06:21:17.423761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.132 [2024-07-13 06:21:17.423794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.132 [2024-07-13 06:21:17.423819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.132 [2024-07-13 06:21:17.432837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.132 [2024-07-13 06:21:17.432878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.132 [2024-07-13 06:21:17.432914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.132 [2024-07-13 06:21:17.441990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.132 [2024-07-13 06:21:17.442019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.132 [2024-07-13 06:21:17.442036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.132 [2024-07-13 06:21:17.451014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.132 [2024-07-13 06:21:17.451042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.132 [2024-07-13 06:21:17.451059] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.132 [2024-07-13 06:21:17.459981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.132 [2024-07-13 06:21:17.460010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.132 [2024-07-13 06:21:17.460027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.132 [2024-07-13 06:21:17.469351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.132 [2024-07-13 06:21:17.469385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.132 [2024-07-13 06:21:17.469404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.132 [2024-07-13 06:21:17.478985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.132 [2024-07-13 06:21:17.479017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.132 [2024-07-13 06:21:17.479035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.132 [2024-07-13 06:21:17.488103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.132 [2024-07-13 06:21:17.488148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.132 [2024-07-13 06:21:17.488165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.132 [2024-07-13 06:21:17.497201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.132 [2024-07-13 06:21:17.497248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.132 [2024-07-13 06:21:17.497268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.132 [2024-07-13 06:21:17.506430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.132 [2024-07-13 06:21:17.506464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.132 [2024-07-13 06:21:17.506483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.132 [2024-07-13 06:21:17.515967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.132 [2024-07-13 06:21:17.516009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:11.132 [2024-07-13 06:21:17.516032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.132 [2024-07-13 06:21:17.525194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.132 [2024-07-13 06:21:17.525233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.132 [2024-07-13 06:21:17.525252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.132 [2024-07-13 06:21:17.534390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.132 [2024-07-13 06:21:17.534423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.132 [2024-07-13 06:21:17.534441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.132 [2024-07-13 06:21:17.543720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.132 [2024-07-13 06:21:17.543755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.132 [2024-07-13 06:21:17.543775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.132 [2024-07-13 06:21:17.553135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.132 [2024-07-13 06:21:17.553175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.132 [2024-07-13 06:21:17.553192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.132 [2024-07-13 06:21:17.562341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.132 [2024-07-13 06:21:17.562375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.132 [2024-07-13 06:21:17.562394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.132 [2024-07-13 06:21:17.571615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.132 [2024-07-13 06:21:17.571649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.132 [2024-07-13 06:21:17.571669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.132 [2024-07-13 06:21:17.581097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.132 [2024-07-13 06:21:17.581126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.132 [2024-07-13 06:21:17.581149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.132 [2024-07-13 06:21:17.590124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.132 [2024-07-13 06:21:17.590168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.132 [2024-07-13 06:21:17.590184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.132 [2024-07-13 06:21:17.599696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.132 [2024-07-13 06:21:17.599729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.132 [2024-07-13 06:21:17.599749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.132 [2024-07-13 06:21:17.608831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.132 [2024-07-13 06:21:17.608885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.132 [2024-07-13 06:21:17.608921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.132 [2024-07-13 06:21:17.618063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.132 [2024-07-13 06:21:17.618093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.132 [2024-07-13 06:21:17.618110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.132 [2024-07-13 06:21:17.627241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.132 [2024-07-13 06:21:17.627289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.132 [2024-07-13 06:21:17.627308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.132 [2024-07-13 06:21:17.636651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.132 [2024-07-13 06:21:17.636685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.132 [2024-07-13 06:21:17.636704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.392 [2024-07-13 06:21:17.645929] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.392 [2024-07-13 06:21:17.645974] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.392 [2024-07-13 06:21:17.645992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.392 [2024-07-13 06:21:17.654944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.392 [2024-07-13 06:21:17.654973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.392 [2024-07-13 06:21:17.654990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.392 [2024-07-13 06:21:17.663921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.392 [2024-07-13 06:21:17.663959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.392 [2024-07-13 06:21:17.663977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.392 [2024-07-13 06:21:17.673037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.392 [2024-07-13 06:21:17.673065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.392 [2024-07-13 06:21:17.673082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.392 [2024-07-13 06:21:17.682400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.392 [2024-07-13 06:21:17.682433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.392 [2024-07-13 06:21:17.682453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.392 [2024-07-13 06:21:17.691348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.392 [2024-07-13 06:21:17.691381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.392 [2024-07-13 06:21:17.691401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.392 [2024-07-13 06:21:17.700324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.392 [2024-07-13 06:21:17.700356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.392 [2024-07-13 06:21:17.700375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.392 [2024-07-13 06:21:17.709384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 
00:26:11.392 [2024-07-13 06:21:17.709417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.392 [2024-07-13 06:21:17.709436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.392 [2024-07-13 06:21:17.718489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.392 [2024-07-13 06:21:17.718522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.392 [2024-07-13 06:21:17.718541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.392 [2024-07-13 06:21:17.727781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.392 [2024-07-13 06:21:17.727813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.392 [2024-07-13 06:21:17.727833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.392 [2024-07-13 06:21:17.736875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.392 [2024-07-13 06:21:17.736919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.392 [2024-07-13 06:21:17.736936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.392 [2024-07-13 06:21:17.746102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.392 [2024-07-13 06:21:17.746131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.392 [2024-07-13 06:21:17.746148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.392 [2024-07-13 06:21:17.755310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.392 [2024-07-13 06:21:17.755343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.392 [2024-07-13 06:21:17.755363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.392 [2024-07-13 06:21:17.764359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.392 [2024-07-13 06:21:17.764393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.392 [2024-07-13 06:21:17.764412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.392 [2024-07-13 06:21:17.773533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.392 [2024-07-13 06:21:17.773567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.392 [2024-07-13 06:21:17.773585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.392 [2024-07-13 06:21:17.782719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.392 [2024-07-13 06:21:17.782753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.392 [2024-07-13 06:21:17.782772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.392 [2024-07-13 06:21:17.792034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.392 [2024-07-13 06:21:17.792064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.392 [2024-07-13 06:21:17.792081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.392 [2024-07-13 06:21:17.801405] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.392 [2024-07-13 06:21:17.801439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.392 [2024-07-13 06:21:17.801458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.392 [2024-07-13 06:21:17.810397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.392 [2024-07-13 06:21:17.810430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.392 [2024-07-13 06:21:17.810449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.392 [2024-07-13 06:21:17.819606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.392 [2024-07-13 06:21:17.819639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.392 [2024-07-13 06:21:17.819665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.392 [2024-07-13 06:21:17.829028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.392 [2024-07-13 06:21:17.829057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.392 [2024-07-13 06:21:17.829074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.392 [2024-07-13 06:21:17.838422] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.392 [2024-07-13 06:21:17.838455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.392 [2024-07-13 06:21:17.838475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.392 [2024-07-13 06:21:17.847464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.392 [2024-07-13 06:21:17.847497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.392 [2024-07-13 06:21:17.847517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.392 [2024-07-13 06:21:17.856663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.392 [2024-07-13 06:21:17.856696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.392 [2024-07-13 06:21:17.856715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.392 [2024-07-13 06:21:17.865955] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.392 [2024-07-13 06:21:17.866000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.392 [2024-07-13 06:21:17.866018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.392 [2024-07-13 06:21:17.875020] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.392 [2024-07-13 06:21:17.875050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.392 [2024-07-13 06:21:17.875067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.392 [2024-07-13 06:21:17.884067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.392 [2024-07-13 06:21:17.884097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.392 [2024-07-13 06:21:17.884113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.392 [2024-07-13 06:21:17.893177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.393 [2024-07-13 06:21:17.893223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.393 [2024-07-13 06:21:17.893242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:26:11.652 [2024-07-13 06:21:17.902586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.652 [2024-07-13 06:21:17.902619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.652 [2024-07-13 06:21:17.902639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.652 [2024-07-13 06:21:17.911852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.652 [2024-07-13 06:21:17.911892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.652 [2024-07-13 06:21:17.911926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.652 [2024-07-13 06:21:17.921154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.652 [2024-07-13 06:21:17.921198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.652 [2024-07-13 06:21:17.921215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.652 [2024-07-13 06:21:17.930244] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.652 [2024-07-13 06:21:17.930277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.652 [2024-07-13 06:21:17.930296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.652 [2024-07-13 06:21:17.939297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.652 [2024-07-13 06:21:17.939329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.652 [2024-07-13 06:21:17.939348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.652 [2024-07-13 06:21:17.948630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.652 [2024-07-13 06:21:17.948664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.652 [2024-07-13 06:21:17.948683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.652 [2024-07-13 06:21:17.957825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.652 [2024-07-13 06:21:17.957858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.652 [2024-07-13 06:21:17.957887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.652 [2024-07-13 06:21:17.966850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.652 [2024-07-13 06:21:17.966891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.652 [2024-07-13 06:21:17.966911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.652 [2024-07-13 06:21:17.975998] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.652 [2024-07-13 06:21:17.976028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.652 [2024-07-13 06:21:17.976051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.652 [2024-07-13 06:21:17.985378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.652 [2024-07-13 06:21:17.985414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.652 [2024-07-13 06:21:17.985434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.652 [2024-07-13 06:21:17.994760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.652 [2024-07-13 06:21:17.994795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.652 [2024-07-13 06:21:17.994814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.652 [2024-07-13 06:21:18.003787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.652 [2024-07-13 06:21:18.003821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.652 [2024-07-13 06:21:18.003841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.652 [2024-07-13 06:21:18.012765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.652 [2024-07-13 06:21:18.012799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.652 [2024-07-13 06:21:18.012818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.652 [2024-07-13 06:21:18.021949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.652 [2024-07-13 06:21:18.021994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.652 [2024-07-13 06:21:18.022011] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.652 [2024-07-13 06:21:18.031209] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.652 [2024-07-13 06:21:18.031241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.652 [2024-07-13 06:21:18.031261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.652 [2024-07-13 06:21:18.040332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.652 [2024-07-13 06:21:18.040364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.652 [2024-07-13 06:21:18.040383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.652 [2024-07-13 06:21:18.049374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.652 [2024-07-13 06:21:18.049407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.652 [2024-07-13 06:21:18.049425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.652 [2024-07-13 06:21:18.058773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.652 [2024-07-13 06:21:18.058812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.652 [2024-07-13 06:21:18.058833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.652 [2024-07-13 06:21:18.067722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.652 [2024-07-13 06:21:18.067756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.652 [2024-07-13 06:21:18.067775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.652 [2024-07-13 06:21:18.076685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.652 [2024-07-13 06:21:18.076717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.652 [2024-07-13 06:21:18.076736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.652 [2024-07-13 06:21:18.085859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.652 [2024-07-13 06:21:18.085917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:11.652 [2024-07-13 06:21:18.085935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.653 [2024-07-13 06:21:18.094889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.653 [2024-07-13 06:21:18.094945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.653 [2024-07-13 06:21:18.094962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.653 [2024-07-13 06:21:18.103769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.653 [2024-07-13 06:21:18.103803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.653 [2024-07-13 06:21:18.103822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.653 [2024-07-13 06:21:18.112835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.653 [2024-07-13 06:21:18.112876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.653 [2024-07-13 06:21:18.112913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.653 [2024-07-13 06:21:18.121994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.653 [2024-07-13 06:21:18.122033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.653 [2024-07-13 06:21:18.122050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.653 [2024-07-13 06:21:18.131255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.653 [2024-07-13 06:21:18.131303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.653 [2024-07-13 06:21:18.131322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.653 [2024-07-13 06:21:18.140363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.653 [2024-07-13 06:21:18.140396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.653 [2024-07-13 06:21:18.140415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.653 [2024-07-13 06:21:18.149466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.653 [2024-07-13 06:21:18.149499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.653 [2024-07-13 06:21:18.149518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.653 [2024-07-13 06:21:18.158496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.653 [2024-07-13 06:21:18.158530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.653 [2024-07-13 06:21:18.158549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.912 [2024-07-13 06:21:18.167214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.912 [2024-07-13 06:21:18.167244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.912 [2024-07-13 06:21:18.167261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.912 [2024-07-13 06:21:18.175429] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.912 [2024-07-13 06:21:18.175460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.912 [2024-07-13 06:21:18.175477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.912 [2024-07-13 06:21:18.183860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.912 [2024-07-13 06:21:18.183901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.912 [2024-07-13 06:21:18.183918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.912 [2024-07-13 06:21:18.192186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.912 [2024-07-13 06:21:18.192216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.912 [2024-07-13 06:21:18.192233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.912 [2024-07-13 06:21:18.200721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.912 [2024-07-13 06:21:18.200750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.912 [2024-07-13 06:21:18.200768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.912 [2024-07-13 06:21:18.209672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.912 [2024-07-13 06:21:18.209704] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.913 [2024-07-13 06:21:18.209728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.913 [2024-07-13 06:21:18.218498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.913 [2024-07-13 06:21:18.218529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.913 [2024-07-13 06:21:18.218547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.913 [2024-07-13 06:21:18.226976] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.913 [2024-07-13 06:21:18.227022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.913 [2024-07-13 06:21:18.227040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.913 [2024-07-13 06:21:18.235576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.913 [2024-07-13 06:21:18.235622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.913 [2024-07-13 06:21:18.235639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.913 [2024-07-13 06:21:18.244109] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.913 [2024-07-13 06:21:18.244139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.913 [2024-07-13 06:21:18.244155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.913 [2024-07-13 06:21:18.252684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.913 [2024-07-13 06:21:18.252716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.913 [2024-07-13 06:21:18.252734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.913 [2024-07-13 06:21:18.260750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.913 [2024-07-13 06:21:18.260780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.913 [2024-07-13 06:21:18.260797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.913 [2024-07-13 06:21:18.269264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 
00:26:11.913 [2024-07-13 06:21:18.269295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.913 [2024-07-13 06:21:18.269312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.913 [2024-07-13 06:21:18.277605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.913 [2024-07-13 06:21:18.277635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.913 [2024-07-13 06:21:18.277652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.913 [2024-07-13 06:21:18.286321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.913 [2024-07-13 06:21:18.286374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.913 [2024-07-13 06:21:18.286392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.913 [2024-07-13 06:21:18.294802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.913 [2024-07-13 06:21:18.294833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.913 [2024-07-13 06:21:18.294850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.913 [2024-07-13 06:21:18.303448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.913 [2024-07-13 06:21:18.303479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.913 [2024-07-13 06:21:18.303496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.913 [2024-07-13 06:21:18.311887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.913 [2024-07-13 06:21:18.311917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.913 [2024-07-13 06:21:18.311934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.913 [2024-07-13 06:21:18.320597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.913 [2024-07-13 06:21:18.320626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.913 [2024-07-13 06:21:18.320643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.913 [2024-07-13 06:21:18.329340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.913 [2024-07-13 06:21:18.329372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.913 [2024-07-13 06:21:18.329390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.913 [2024-07-13 06:21:18.337951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.913 [2024-07-13 06:21:18.337981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.913 [2024-07-13 06:21:18.337999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.913 [2024-07-13 06:21:18.346039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.913 [2024-07-13 06:21:18.346069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.913 [2024-07-13 06:21:18.346086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.913 [2024-07-13 06:21:18.354355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.913 [2024-07-13 06:21:18.354386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.913 [2024-07-13 06:21:18.354403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.913 [2024-07-13 06:21:18.362721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.913 [2024-07-13 06:21:18.362751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.913 [2024-07-13 06:21:18.362768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.913 [2024-07-13 06:21:18.370975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.913 [2024-07-13 06:21:18.371004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.913 [2024-07-13 06:21:18.371022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.913 [2024-07-13 06:21:18.378983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.913 [2024-07-13 06:21:18.379013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.913 [2024-07-13 06:21:18.379030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.913 [2024-07-13 06:21:18.387016] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.913 [2024-07-13 06:21:18.387046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.913 [2024-07-13 06:21:18.387063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:11.913 [2024-07-13 06:21:18.395419] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.913 [2024-07-13 06:21:18.395449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.913 [2024-07-13 06:21:18.395466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:11.913 [2024-07-13 06:21:18.404127] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.913 [2024-07-13 06:21:18.404158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.913 [2024-07-13 06:21:18.404176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:11.913 [2024-07-13 06:21:18.412999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.913 [2024-07-13 06:21:18.413044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.913 [2024-07-13 06:21:18.413062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:11.913 [2024-07-13 06:21:18.421625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:11.913 [2024-07-13 06:21:18.421671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.913 [2024-07-13 06:21:18.421687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.173 [2024-07-13 06:21:18.430464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.173 [2024-07-13 06:21:18.430517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.173 [2024-07-13 06:21:18.430536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.174 [2024-07-13 06:21:18.439046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.174 [2024-07-13 06:21:18.439075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.174 [2024-07-13 06:21:18.439093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:26:12.174 [2024-07-13 06:21:18.447186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.174 [2024-07-13 06:21:18.447216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.174 [2024-07-13 06:21:18.447233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.174 [2024-07-13 06:21:18.455529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.174 [2024-07-13 06:21:18.455559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.174 [2024-07-13 06:21:18.455576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.174 [2024-07-13 06:21:18.463818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.174 [2024-07-13 06:21:18.463847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.174 [2024-07-13 06:21:18.463873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.174 [2024-07-13 06:21:18.471985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.174 [2024-07-13 06:21:18.472015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.174 [2024-07-13 06:21:18.472032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.174 [2024-07-13 06:21:18.480520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.174 [2024-07-13 06:21:18.480552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.174 [2024-07-13 06:21:18.480570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.174 [2024-07-13 06:21:18.489040] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.174 [2024-07-13 06:21:18.489071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.174 [2024-07-13 06:21:18.489089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.174 [2024-07-13 06:21:18.497862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.174 [2024-07-13 06:21:18.497901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.174 [2024-07-13 06:21:18.497929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.174 [2024-07-13 06:21:18.506749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.174 [2024-07-13 06:21:18.506794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.174 [2024-07-13 06:21:18.506813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.174 [2024-07-13 06:21:18.515351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.174 [2024-07-13 06:21:18.515381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.174 [2024-07-13 06:21:18.515398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.174 [2024-07-13 06:21:18.523903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.174 [2024-07-13 06:21:18.523934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.174 [2024-07-13 06:21:18.523952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.174 [2024-07-13 06:21:18.532139] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.174 [2024-07-13 06:21:18.532169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.174 [2024-07-13 06:21:18.532186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.174 [2024-07-13 06:21:18.540351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.174 [2024-07-13 06:21:18.540380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.174 [2024-07-13 06:21:18.540398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.174 [2024-07-13 06:21:18.548922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.174 [2024-07-13 06:21:18.548952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.174 [2024-07-13 06:21:18.548968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.174 [2024-07-13 06:21:18.557437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.174 [2024-07-13 06:21:18.557466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.174 [2024-07-13 06:21:18.557483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.174 [2024-07-13 06:21:18.565989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.174 [2024-07-13 06:21:18.566020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.174 [2024-07-13 06:21:18.566037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.174 [2024-07-13 06:21:18.574786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.174 [2024-07-13 06:21:18.574831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.174 [2024-07-13 06:21:18.574855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.174 [2024-07-13 06:21:18.583459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.174 [2024-07-13 06:21:18.583504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.174 [2024-07-13 06:21:18.583521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.174 [2024-07-13 06:21:18.592010] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.174 [2024-07-13 06:21:18.592041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.174 [2024-07-13 06:21:18.592059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.174 [2024-07-13 06:21:18.600306] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.174 [2024-07-13 06:21:18.600336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.174 [2024-07-13 06:21:18.600353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.174 [2024-07-13 06:21:18.609081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.174 [2024-07-13 06:21:18.609113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.174 [2024-07-13 06:21:18.609131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.174 [2024-07-13 06:21:18.617483] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.174 [2024-07-13 06:21:18.617528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:12.174 [2024-07-13 06:21:18.617545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.174 [2024-07-13 06:21:18.625783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.174 [2024-07-13 06:21:18.625812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.174 [2024-07-13 06:21:18.625829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.174 [2024-07-13 06:21:18.634270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.174 [2024-07-13 06:21:18.634316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.174 [2024-07-13 06:21:18.634333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.174 [2024-07-13 06:21:18.642674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.174 [2024-07-13 06:21:18.642702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.174 [2024-07-13 06:21:18.642718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.174 [2024-07-13 06:21:18.651012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.174 [2024-07-13 06:21:18.651049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.174 [2024-07-13 06:21:18.651067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.174 [2024-07-13 06:21:18.659276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.174 [2024-07-13 06:21:18.659320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.174 [2024-07-13 06:21:18.659337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.174 [2024-07-13 06:21:18.667734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.174 [2024-07-13 06:21:18.667763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.174 [2024-07-13 06:21:18.667779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.174 [2024-07-13 06:21:18.676088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.174 [2024-07-13 06:21:18.676120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.174 [2024-07-13 06:21:18.676138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.434 [2024-07-13 06:21:18.684368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.434 [2024-07-13 06:21:18.684399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.434 [2024-07-13 06:21:18.684432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.434 [2024-07-13 06:21:18.692924] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.434 [2024-07-13 06:21:18.692956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.434 [2024-07-13 06:21:18.692973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.434 [2024-07-13 06:21:18.701233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.434 [2024-07-13 06:21:18.701263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.434 [2024-07-13 06:21:18.701280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.434 [2024-07-13 06:21:18.709820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.434 [2024-07-13 06:21:18.709851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.434 [2024-07-13 06:21:18.709877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.434 [2024-07-13 06:21:18.718266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.434 [2024-07-13 06:21:18.718296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.434 [2024-07-13 06:21:18.718313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.434 [2024-07-13 06:21:18.726679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.434 [2024-07-13 06:21:18.726710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.434 [2024-07-13 06:21:18.726727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.434 [2024-07-13 06:21:18.735054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.434 [2024-07-13 06:21:18.735085] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.434 [2024-07-13 06:21:18.735102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.434 [2024-07-13 06:21:18.743612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.434 [2024-07-13 06:21:18.743657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.434 [2024-07-13 06:21:18.743674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.434 [2024-07-13 06:21:18.752593] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.434 [2024-07-13 06:21:18.752623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.434 [2024-07-13 06:21:18.752641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.434 [2024-07-13 06:21:18.760893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.434 [2024-07-13 06:21:18.760922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.434 [2024-07-13 06:21:18.760940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.434 [2024-07-13 06:21:18.769225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.434 [2024-07-13 06:21:18.769268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.434 [2024-07-13 06:21:18.769285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.434 [2024-07-13 06:21:18.777637] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.434 [2024-07-13 06:21:18.777682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.434 [2024-07-13 06:21:18.777700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.434 [2024-07-13 06:21:18.785950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.434 [2024-07-13 06:21:18.785980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.434 [2024-07-13 06:21:18.785998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.434 [2024-07-13 06:21:18.794036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 
00:26:12.434 [2024-07-13 06:21:18.794066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.434 [2024-07-13 06:21:18.794093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.434 [2024-07-13 06:21:18.802393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.434 [2024-07-13 06:21:18.802424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.434 [2024-07-13 06:21:18.802442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.434 [2024-07-13 06:21:18.810921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.434 [2024-07-13 06:21:18.810951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.434 [2024-07-13 06:21:18.810967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.434 [2024-07-13 06:21:18.819288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.434 [2024-07-13 06:21:18.819318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.434 [2024-07-13 06:21:18.819350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.434 [2024-07-13 06:21:18.827769] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.434 [2024-07-13 06:21:18.827800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.434 [2024-07-13 06:21:18.827817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.434 [2024-07-13 06:21:18.835973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.434 [2024-07-13 06:21:18.836003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.434 [2024-07-13 06:21:18.836020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.434 [2024-07-13 06:21:18.844754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.434 [2024-07-13 06:21:18.844784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.434 [2024-07-13 06:21:18.844801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.434 [2024-07-13 06:21:18.853097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.434 [2024-07-13 06:21:18.853125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.434 [2024-07-13 06:21:18.853143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.434 [2024-07-13 06:21:18.861706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.434 [2024-07-13 06:21:18.861736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.434 [2024-07-13 06:21:18.861753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.434 [2024-07-13 06:21:18.870071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.434 [2024-07-13 06:21:18.870102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.434 [2024-07-13 06:21:18.870119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.434 [2024-07-13 06:21:18.878801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.434 [2024-07-13 06:21:18.878833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.434 [2024-07-13 06:21:18.878873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.434 [2024-07-13 06:21:18.888146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.434 [2024-07-13 06:21:18.888194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.434 [2024-07-13 06:21:18.888212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.435 [2024-07-13 06:21:18.898488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.435 [2024-07-13 06:21:18.898520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.435 [2024-07-13 06:21:18.898537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.435 [2024-07-13 06:21:18.908927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.435 [2024-07-13 06:21:18.908961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.435 [2024-07-13 06:21:18.908979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.435 [2024-07-13 06:21:18.919315] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.435 [2024-07-13 06:21:18.919347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.435 [2024-07-13 06:21:18.919364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.435 [2024-07-13 06:21:18.929495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.435 [2024-07-13 06:21:18.929542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.435 [2024-07-13 06:21:18.929560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.435 [2024-07-13 06:21:18.939808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.435 [2024-07-13 06:21:18.939855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.435 [2024-07-13 06:21:18.939881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.694 [2024-07-13 06:21:18.949304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.694 [2024-07-13 06:21:18.949338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.694 [2024-07-13 06:21:18.949378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.694 [2024-07-13 06:21:18.959495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.694 [2024-07-13 06:21:18.959528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.694 [2024-07-13 06:21:18.959545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.694 [2024-07-13 06:21:18.969583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.694 [2024-07-13 06:21:18.969614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.694 [2024-07-13 06:21:18.969631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.694 [2024-07-13 06:21:18.979708] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.694 [2024-07-13 06:21:18.979741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.694 [2024-07-13 06:21:18.979759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:26:12.694 [2024-07-13 06:21:18.990202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.694 [2024-07-13 06:21:18.990233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.694 [2024-07-13 06:21:18.990250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.694 [2024-07-13 06:21:19.000155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.694 [2024-07-13 06:21:19.000187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.694 [2024-07-13 06:21:19.000205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.694 [2024-07-13 06:21:19.010628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.694 [2024-07-13 06:21:19.010661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.694 [2024-07-13 06:21:19.010679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.694 [2024-07-13 06:21:19.021065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.694 [2024-07-13 06:21:19.021098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.694 [2024-07-13 06:21:19.021116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.694 [2024-07-13 06:21:19.031046] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.694 [2024-07-13 06:21:19.031077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.694 [2024-07-13 06:21:19.031096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.694 [2024-07-13 06:21:19.040485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.694 [2024-07-13 06:21:19.040523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.694 [2024-07-13 06:21:19.040541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.694 [2024-07-13 06:21:19.050355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.694 [2024-07-13 06:21:19.050390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.694 [2024-07-13 06:21:19.050410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.694 [2024-07-13 06:21:19.059837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.694 [2024-07-13 06:21:19.059880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.694 [2024-07-13 06:21:19.059902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.694 [2024-07-13 06:21:19.069746] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.694 [2024-07-13 06:21:19.069781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.694 [2024-07-13 06:21:19.069800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.694 [2024-07-13 06:21:19.079221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.694 [2024-07-13 06:21:19.079256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.694 [2024-07-13 06:21:19.079275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.694 [2024-07-13 06:21:19.088432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.694 [2024-07-13 06:21:19.088464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.694 [2024-07-13 06:21:19.088483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.694 [2024-07-13 06:21:19.097827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.694 [2024-07-13 06:21:19.097862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.694 [2024-07-13 06:21:19.097895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.694 [2024-07-13 06:21:19.106905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.694 [2024-07-13 06:21:19.106935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.694 [2024-07-13 06:21:19.106952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.694 [2024-07-13 06:21:19.115845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.694 [2024-07-13 06:21:19.115889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.694 [2024-07-13 06:21:19.115924] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.694 [2024-07-13 06:21:19.124843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.694 [2024-07-13 06:21:19.124885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.694 [2024-07-13 06:21:19.124920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.694 [2024-07-13 06:21:19.133959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.694 [2024-07-13 06:21:19.133989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.694 [2024-07-13 06:21:19.134007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.694 [2024-07-13 06:21:19.143270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.694 [2024-07-13 06:21:19.143304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.694 [2024-07-13 06:21:19.143323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.694 [2024-07-13 06:21:19.152282] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.694 [2024-07-13 06:21:19.152315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.694 [2024-07-13 06:21:19.152334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.694 [2024-07-13 06:21:19.161371] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.694 [2024-07-13 06:21:19.161414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.694 [2024-07-13 06:21:19.161434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.694 [2024-07-13 06:21:19.170474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.694 [2024-07-13 06:21:19.170506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.694 [2024-07-13 06:21:19.170525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.694 [2024-07-13 06:21:19.179619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.694 [2024-07-13 06:21:19.179652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:12.694 [2024-07-13 06:21:19.179670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.695 [2024-07-13 06:21:19.189138] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.695 [2024-07-13 06:21:19.189186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.695 [2024-07-13 06:21:19.189205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.695 [2024-07-13 06:21:19.198464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.695 [2024-07-13 06:21:19.198498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.695 [2024-07-13 06:21:19.198523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.954 [2024-07-13 06:21:19.207605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.954 [2024-07-13 06:21:19.207638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.954 [2024-07-13 06:21:19.207657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.954 [2024-07-13 06:21:19.217024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.954 [2024-07-13 06:21:19.217054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.954 [2024-07-13 06:21:19.217071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.954 [2024-07-13 06:21:19.226053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.954 [2024-07-13 06:21:19.226082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.954 [2024-07-13 06:21:19.226099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.954 [2024-07-13 06:21:19.235258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.954 [2024-07-13 06:21:19.235292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.954 [2024-07-13 06:21:19.235312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.954 [2024-07-13 06:21:19.244379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.954 [2024-07-13 06:21:19.244411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.954 [2024-07-13 06:21:19.244430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.954 [2024-07-13 06:21:19.253340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.954 [2024-07-13 06:21:19.253373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.954 [2024-07-13 06:21:19.253392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.954 [2024-07-13 06:21:19.262414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.954 [2024-07-13 06:21:19.262447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.954 [2024-07-13 06:21:19.262467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.954 [2024-07-13 06:21:19.271638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.954 [2024-07-13 06:21:19.271672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.954 [2024-07-13 06:21:19.271691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:12.954 [2024-07-13 06:21:19.280852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.954 [2024-07-13 06:21:19.280914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.954 [2024-07-13 06:21:19.280934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:12.954 [2024-07-13 06:21:19.290117] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.954 [2024-07-13 06:21:19.290146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.954 [2024-07-13 06:21:19.290162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:12.954 [2024-07-13 06:21:19.299495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.954 [2024-07-13 06:21:19.299529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:12.954 [2024-07-13 06:21:19.299548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:12.954 [2024-07-13 06:21:19.308695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880) 00:26:12.954 [2024-07-13 06:21:19.308729] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.954 [2024-07-13 06:21:19.308748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:12.954 [2024-07-13 06:21:19.317757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880)
00:26:12.954 [2024-07-13 06:21:19.317790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.954 [2024-07-13 06:21:19.317809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:12.954 [2024-07-13 06:21:19.327031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7da880)
00:26:12.954 [2024-07-13 06:21:19.327062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:12.954 [2024-07-13 06:21:19.327093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:12.954
00:26:12.954 Latency(us)
00:26:12.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:12.954 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:26:12.954 nvme0n1 : 2.00 3450.03 431.25 0.00 0.00 4632.46 3859.34 10679.94
00:26:12.954 ===================================================================================================================
00:26:12.954 Total : 3450.03 431.25 0.00 0.00 4632.46 3859.34 10679.94
00:26:12.954 0
00:26:12.954 06:21:19 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:12.954 06:21:19 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:12.954 06:21:19 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:12.954 | .driver_specific
00:26:12.954 | .nvme_error
00:26:12.954 | .status_code
00:26:12.954 | .command_transient_transport_error'
00:26:12.954 06:21:19 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:13.212 06:21:19 -- host/digest.sh@71 -- # (( 223 > 0 ))
00:26:13.212 06:21:19 -- host/digest.sh@73 -- # killprocess 1228189
00:26:13.212 06:21:19 -- common/autotest_common.sh@926 -- # '[' -z 1228189 ']'
00:26:13.212 06:21:19 -- common/autotest_common.sh@930 -- # kill -0 1228189
00:26:13.212 06:21:19 -- common/autotest_common.sh@931 -- # uname
00:26:13.212 06:21:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:26:13.212 06:21:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1228189
00:26:13.212 06:21:19 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:26:13.212 06:21:19 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:26:13.212 06:21:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1228189'
00:26:13.212 killing process with pid 1228189
00:26:13.212 06:21:19 -- common/autotest_common.sh@945 -- # kill 1228189
00:26:13.212 Received shutdown signal, test time was about 2.000000 seconds
00:26:13.212
00:26:13.212 Latency(us)
00:26:13.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:13.212 ===================================================================================================================
00:26:13.212 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:13.212 06:21:19 -- common/autotest_common.sh@950 -- # wait 1228189
00:26:13.470 06:21:19 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:26:13.470 06:21:19 -- host/digest.sh@54 -- # local rw bs qd
00:26:13.470 06:21:19 -- host/digest.sh@56 -- # rw=randwrite
00:26:13.470 06:21:19 -- host/digest.sh@56 -- # bs=4096
00:26:13.470 06:21:19 -- host/digest.sh@56 -- # qd=128
00:26:13.470 06:21:19 -- host/digest.sh@58 -- # bperfpid=1228740
00:26:13.470 06:21:19 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:26:13.470 06:21:19 -- host/digest.sh@60 -- # waitforlisten 1228740 /var/tmp/bperf.sock
00:26:13.470 06:21:19 -- common/autotest_common.sh@819 -- # '[' -z 1228740 ']'
00:26:13.470 06:21:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:13.470 06:21:19 -- common/autotest_common.sh@824 -- # local max_retries=100
00:26:13.470 06:21:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:13.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:13.470 06:21:19 -- common/autotest_common.sh@828 -- # xtrace_disable
00:26:13.470 06:21:19 -- common/autotest_common.sh@10 -- # set +x
00:26:13.470 [2024-07-13 06:21:19.916418] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:26:13.470 [2024-07-13 06:21:19.916499] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1228740 ]
00:26:13.470 EAL: No free 2048 kB hugepages reported on node 1
00:26:13.728 [2024-07-13 06:21:19.979274] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:13.728 [2024-07-13 06:21:20.099969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:26:14.663 06:21:20 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:26:14.663 06:21:20 -- common/autotest_common.sh@852 -- # return 0
00:26:14.663 06:21:20 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:14.663 06:21:20 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:14.663 06:21:21 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:14.663 06:21:21 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:14.663 06:21:21 -- common/autotest_common.sh@10 -- # set +x
00:26:14.663 06:21:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:14.663 06:21:21 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:14.663 06:21:21 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:15.229 nvme0n1
00:26:15.229 06:21:21 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
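Note: the digest.sh trace above reduces to a short RPC sequence. The sketch below only recombines commands that appear in the trace (bdevperf on /var/tmp/bperf.sock, nvme-error-stat options, the --ddgst attach, crc32c error injection, perform_tests, and the bdev_get_iostat/jq error count); the sleep stand-in for waitforlisten and the default-socket TARGET_RPC for rpc_cmd are assumptions, not part of this log.
#!/usr/bin/env bash
# Condensed sketch of one data-digest error pass, following the trace above.
set -e
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"   # bperf_rpc in the trace
TARGET_RPC="$SPDK/scripts/rpc.py"                         # rpc_cmd in the trace (assumed default socket)

# Start bdevperf on its own RPC socket: core mask 0x2, 4 KiB randwrite, QD 128, 2 s, wait for RPC (-z).
"$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
sleep 2   # stand-in for waitforlisten on /var/tmp/bperf.sock

# Keep per-status-code NVMe error counters and retry forever, so transient transport
# errors are counted instead of failing the job.
$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the NVMe/TCP controller with data digest enabled; the namespace appears as nvme0n1.
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt every 256th crc32c computed in the accel layer, then drive the workload.
$TARGET_RPC accel_error_inject_error -o crc32c -t corrupt -i 256
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

# Read back the transient-transport-error count that get_transient_errcount compares
# against 0 (223 in the randread pass above).
$BPERF_RPC bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'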
00:26:15.229 06:21:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:15.229 06:21:21 -- common/autotest_common.sh@10 -- # set +x 00:26:15.229 06:21:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:15.229 06:21:21 -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:15.229 06:21:21 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:15.229 Running I/O for 2 seconds... 00:26:15.229 [2024-07-13 06:21:21.599722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190ee5c8 00:26:15.229 [2024-07-13 06:21:21.600178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.230 [2024-07-13 06:21:21.600220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:15.230 [2024-07-13 06:21:21.612478] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f2948 00:26:15.230 [2024-07-13 06:21:21.613466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.230 [2024-07-13 06:21:21.613505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:15.230 [2024-07-13 06:21:21.624848] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e38d0 00:26:15.230 [2024-07-13 06:21:21.625543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.230 [2024-07-13 06:21:21.625585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:15.230 [2024-07-13 06:21:21.637155] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190ef6a8 00:26:15.230 [2024-07-13 06:21:21.638071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:12159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.230 [2024-07-13 06:21:21.638104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:15.230 [2024-07-13 06:21:21.649405] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f2d80 00:26:15.230 [2024-07-13 06:21:21.651368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:14254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.230 [2024-07-13 06:21:21.651400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:15.230 [2024-07-13 06:21:21.661812] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190ea680 00:26:15.230 [2024-07-13 06:21:21.663246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.230 [2024-07-13 06:21:21.663282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0069 p:0 
m:0 dnr:0 00:26:15.230 [2024-07-13 06:21:21.674213] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190ea680 00:26:15.230 [2024-07-13 06:21:21.675674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.230 [2024-07-13 06:21:21.675710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:15.230 [2024-07-13 06:21:21.686702] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190ea680 00:26:15.230 [2024-07-13 06:21:21.688137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.230 [2024-07-13 06:21:21.688184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:15.230 [2024-07-13 06:21:21.699137] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f3a28 00:26:15.230 [2024-07-13 06:21:21.700652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:62 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.230 [2024-07-13 06:21:21.700688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:15.230 [2024-07-13 06:21:21.711651] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190ef270 00:26:15.230 [2024-07-13 06:21:21.713094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:1476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.230 [2024-07-13 06:21:21.713123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:15.230 [2024-07-13 06:21:21.724014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190eb760 00:26:15.230 [2024-07-13 06:21:21.725486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.230 [2024-07-13 06:21:21.725521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:15.230 [2024-07-13 06:21:21.736404] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190ef6a8 00:26:15.230 [2024-07-13 06:21:21.737923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.230 [2024-07-13 06:21:21.737954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:15.489 [2024-07-13 06:21:21.749028] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e5a90 00:26:15.489 [2024-07-13 06:21:21.750545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.489 [2024-07-13 06:21:21.750581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 
sqhd:0070 p:0 m:0 dnr:0 00:26:15.489 [2024-07-13 06:21:21.761315] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190ec408 00:26:15.489 [2024-07-13 06:21:21.762863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.489 [2024-07-13 06:21:21.762919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:15.489 [2024-07-13 06:21:21.773689] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190efae0 00:26:15.489 [2024-07-13 06:21:21.775034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.489 [2024-07-13 06:21:21.775064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:15.489 [2024-07-13 06:21:21.786079] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190efae0 00:26:15.489 [2024-07-13 06:21:21.787454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.489 [2024-07-13 06:21:21.787489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:15.489 [2024-07-13 06:21:21.798512] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190efae0 00:26:15.489 [2024-07-13 06:21:21.799944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.489 [2024-07-13 06:21:21.799978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:15.489 [2024-07-13 06:21:21.810778] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f4f40 00:26:15.489 [2024-07-13 06:21:21.812192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.489 [2024-07-13 06:21:21.812228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:15.489 [2024-07-13 06:21:21.823191] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f57b0 00:26:15.489 [2024-07-13 06:21:21.825016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.489 [2024-07-13 06:21:21.825061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:15.489 [2024-07-13 06:21:21.835482] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f57b0 00:26:15.489 [2024-07-13 06:21:21.836766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.489 [2024-07-13 06:21:21.836801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:102 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:15.489 [2024-07-13 06:21:21.847735] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f57b0 00:26:15.489 [2024-07-13 06:21:21.849091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.489 [2024-07-13 06:21:21.849120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:15.489 [2024-07-13 06:21:21.859952] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f57b0 00:26:15.489 [2024-07-13 06:21:21.861448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.489 [2024-07-13 06:21:21.861485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:15.489 [2024-07-13 06:21:21.872240] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f46d0 00:26:15.489 [2024-07-13 06:21:21.873838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.489 [2024-07-13 06:21:21.873881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:15.489 [2024-07-13 06:21:21.884108] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e12d8 00:26:15.489 [2024-07-13 06:21:21.884827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.489 [2024-07-13 06:21:21.884880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:15.489 [2024-07-13 06:21:21.896756] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f2948 00:26:15.489 [2024-07-13 06:21:21.897280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:6385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.489 [2024-07-13 06:21:21.897316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:15.489 [2024-07-13 06:21:21.911023] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f81e0 00:26:15.489 [2024-07-13 06:21:21.911941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.489 [2024-07-13 06:21:21.911976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:15.489 [2024-07-13 06:21:21.923024] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f0ff8 00:26:15.489 [2024-07-13 06:21:21.923942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.489 [2024-07-13 06:21:21.923988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:15.489 [2024-07-13 06:21:21.935236] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190ea248 00:26:15.489 [2024-07-13 06:21:21.936123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.489 [2024-07-13 06:21:21.936161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:15.489 [2024-07-13 06:21:21.947379] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190ef270 00:26:15.489 [2024-07-13 06:21:21.948277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.489 [2024-07-13 06:21:21.948313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:15.489 [2024-07-13 06:21:21.959218] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f2510 00:26:15.489 [2024-07-13 06:21:21.960150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.489 [2024-07-13 06:21:21.960182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:15.489 [2024-07-13 06:21:21.971424] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e3d08 00:26:15.489 [2024-07-13 06:21:21.972390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.489 [2024-07-13 06:21:21.972429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:15.489 [2024-07-13 06:21:21.983501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f7100 00:26:15.489 [2024-07-13 06:21:21.985499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.489 [2024-07-13 06:21:21.985539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:15.489 [2024-07-13 06:21:21.995883] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e6738 00:26:15.489 [2024-07-13 06:21:21.997421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.489 [2024-07-13 06:21:21.997457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:15.750 [2024-07-13 06:21:22.008440] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e0630 00:26:15.750 [2024-07-13 06:21:22.010033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.750 [2024-07-13 06:21:22.010065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:15.750 [2024-07-13 06:21:22.020938] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e01f8 00:26:15.750 [2024-07-13 06:21:22.022535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.750 [2024-07-13 06:21:22.022572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:15.750 [2024-07-13 06:21:22.033373] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f7da8 00:26:15.750 [2024-07-13 06:21:22.035000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.750 [2024-07-13 06:21:22.035029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:15.750 [2024-07-13 06:21:22.045718] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f6890 00:26:15.750 [2024-07-13 06:21:22.047383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.750 [2024-07-13 06:21:22.047418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:15.750 [2024-07-13 06:21:22.058019] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e8d30 00:26:15.750 [2024-07-13 06:21:22.059676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.750 [2024-07-13 06:21:22.059712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:15.750 [2024-07-13 06:21:22.070369] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e23b8 00:26:15.750 [2024-07-13 06:21:22.072028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.750 [2024-07-13 06:21:22.072057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:15.750 [2024-07-13 06:21:22.082731] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e0630 00:26:15.750 [2024-07-13 06:21:22.084390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.750 [2024-07-13 06:21:22.084425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:15.750 [2024-07-13 06:21:22.094965] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e01f8 00:26:15.750 [2024-07-13 06:21:22.096600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.750 [2024-07-13 06:21:22.096636] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:15.750 [2024-07-13 06:21:22.107272] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e6b70 00:26:15.750 [2024-07-13 06:21:22.108973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.750 [2024-07-13 06:21:22.109002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:15.750 [2024-07-13 06:21:22.119605] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190eff18 00:26:15.750 [2024-07-13 06:21:22.121295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.750 [2024-07-13 06:21:22.121335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:15.750 [2024-07-13 06:21:22.131891] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190fb480 00:26:15.750 [2024-07-13 06:21:22.133553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.750 [2024-07-13 06:21:22.133588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:15.750 [2024-07-13 06:21:22.144343] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f9f68 00:26:15.750 [2024-07-13 06:21:22.146038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.750 [2024-07-13 06:21:22.146069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:15.750 [2024-07-13 06:21:22.156757] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f20d8 00:26:15.750 [2024-07-13 06:21:22.158429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.751 [2024-07-13 06:21:22.158465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:15.751 [2024-07-13 06:21:22.169289] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e6738 00:26:15.751 [2024-07-13 06:21:22.170660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.751 [2024-07-13 06:21:22.170696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:15.751 [2024-07-13 06:21:22.181687] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e6738 00:26:15.751 [2024-07-13 06:21:22.183207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.751 [2024-07-13 06:21:22.183252] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:15.751 [2024-07-13 06:21:22.194204] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190fa7d8 00:26:15.751 [2024-07-13 06:21:22.195739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.751 [2024-07-13 06:21:22.195774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:15.751 [2024-07-13 06:21:22.206649] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f2510 00:26:15.751 [2024-07-13 06:21:22.208227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.751 [2024-07-13 06:21:22.208262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:15.751 [2024-07-13 06:21:22.219152] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f0350 00:26:15.751 [2024-07-13 06:21:22.220738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.751 [2024-07-13 06:21:22.220773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.751 [2024-07-13 06:21:22.230224] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190ef6a8 00:26:15.751 [2024-07-13 06:21:22.231371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.751 [2024-07-13 06:21:22.231406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:15.751 [2024-07-13 06:21:22.242668] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f1ca0 00:26:15.751 [2024-07-13 06:21:22.243783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.751 [2024-07-13 06:21:22.243819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:15.751 [2024-07-13 06:21:22.255094] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190fa3a0 00:26:15.751 [2024-07-13 06:21:22.256265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:15.751 [2024-07-13 06:21:22.256300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:16.009 [2024-07-13 06:21:22.267794] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190fa3a0 00:26:16.009 [2024-07-13 06:21:22.268961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.009 [2024-07-13 06:21:22.268990] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:16.009 [2024-07-13 06:21:22.280210] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190fa3a0 00:26:16.009 [2024-07-13 06:21:22.281384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.009 [2024-07-13 06:21:22.281419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:16.009 [2024-07-13 06:21:22.292730] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190fa3a0 00:26:16.009 [2024-07-13 06:21:22.293936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.009 [2024-07-13 06:21:22.293966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:16.009 [2024-07-13 06:21:22.305115] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190fa3a0 00:26:16.009 [2024-07-13 06:21:22.306344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.009 [2024-07-13 06:21:22.306380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:16.009 [2024-07-13 06:21:22.317540] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190fa3a0 00:26:16.009 [2024-07-13 06:21:22.318783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.009 [2024-07-13 06:21:22.318817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:16.009 [2024-07-13 06:21:22.329969] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190fa3a0 00:26:16.009 [2024-07-13 06:21:22.331155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.009 [2024-07-13 06:21:22.331204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:16.009 [2024-07-13 06:21:22.342323] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190fa3a0 00:26:16.009 [2024-07-13 06:21:22.343577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.009 [2024-07-13 06:21:22.343612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:16.009 [2024-07-13 06:21:22.354765] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190fa3a0 00:26:16.009 [2024-07-13 06:21:22.356004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.009 [2024-07-13 
06:21:22.356032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:16.009 [2024-07-13 06:21:22.367109] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190fa3a0 00:26:16.009 [2024-07-13 06:21:22.368382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.009 [2024-07-13 06:21:22.368417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:16.009 [2024-07-13 06:21:22.379465] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190fa3a0 00:26:16.009 [2024-07-13 06:21:22.380735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.009 [2024-07-13 06:21:22.380770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:16.009 [2024-07-13 06:21:22.391791] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e8d30 00:26:16.009 [2024-07-13 06:21:22.393051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.009 [2024-07-13 06:21:22.393080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:16.009 [2024-07-13 06:21:22.404106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e8d30 00:26:16.010 [2024-07-13 06:21:22.405401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.010 [2024-07-13 06:21:22.405437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:16.010 [2024-07-13 06:21:22.416406] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e8d30 00:26:16.010 [2024-07-13 06:21:22.417719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.010 [2024-07-13 06:21:22.417754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:16.010 [2024-07-13 06:21:22.428734] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e8d30 00:26:16.010 [2024-07-13 06:21:22.430032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.010 [2024-07-13 06:21:22.430061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:16.010 [2024-07-13 06:21:22.440954] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e8d30 00:26:16.010 [2024-07-13 06:21:22.442320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:16.010 [2024-07-13 06:21:22.442363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:16.010 [2024-07-13 06:21:22.453888] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e73e0 00:26:16.010 [2024-07-13 06:21:22.455232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.010 [2024-07-13 06:21:22.455268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:16.010 [2024-07-13 06:21:22.466389] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e1b48 00:26:16.010 [2024-07-13 06:21:22.467749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.010 [2024-07-13 06:21:22.467792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:16.010 [2024-07-13 06:21:22.479228] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f0bc0 00:26:16.010 [2024-07-13 06:21:22.480234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.010 [2024-07-13 06:21:22.480270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:16.010 [2024-07-13 06:21:22.491856] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f7970 00:26:16.010 [2024-07-13 06:21:22.492717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.010 [2024-07-13 06:21:22.492760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:16.010 [2024-07-13 06:21:22.504266] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190ea680 00:26:16.010 [2024-07-13 06:21:22.505787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.010 [2024-07-13 06:21:22.505830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:16.010 [2024-07-13 06:21:22.516652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e3498 00:26:16.010 [2024-07-13 06:21:22.518489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.010 [2024-07-13 06:21:22.518533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:16.268 [2024-07-13 06:21:22.529408] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190fb048 00:26:16.268 [2024-07-13 06:21:22.530972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:652 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:26:16.268 [2024-07-13 06:21:22.531015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:16.268 [2024-07-13 06:21:22.541924] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e6300 00:26:16.268 [2024-07-13 06:21:22.543429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.268 [2024-07-13 06:21:22.543464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:16.268 [2024-07-13 06:21:22.554284] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f31b8 00:26:16.268 [2024-07-13 06:21:22.555756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.268 [2024-07-13 06:21:22.555797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:16.268 [2024-07-13 06:21:22.566514] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e3498 00:26:16.268 [2024-07-13 06:21:22.568071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.268 [2024-07-13 06:21:22.568103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:16.268 [2024-07-13 06:21:22.578941] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f8618 00:26:16.268 [2024-07-13 06:21:22.580502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:18473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.268 [2024-07-13 06:21:22.580538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:16.268 [2024-07-13 06:21:22.591515] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f1ca0 00:26:16.268 [2024-07-13 06:21:22.593103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.268 [2024-07-13 06:21:22.593133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:16.268 [2024-07-13 06:21:22.603912] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190ee5c8 00:26:16.268 [2024-07-13 06:21:22.605504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.268 [2024-07-13 06:21:22.605540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:16.268 [2024-07-13 06:21:22.616354] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190ebb98 00:26:16.268 [2024-07-13 06:21:22.617949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13887 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.268 [2024-07-13 06:21:22.617980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:16.268 [2024-07-13 06:21:22.628751] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e99d8 00:26:16.268 [2024-07-13 06:21:22.630412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.268 [2024-07-13 06:21:22.630448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:16.268 [2024-07-13 06:21:22.641145] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e4578 00:26:16.268 [2024-07-13 06:21:22.642828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.268 [2024-07-13 06:21:22.642881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:16.268 [2024-07-13 06:21:22.653737] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e4578 00:26:16.268 [2024-07-13 06:21:22.655435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.268 [2024-07-13 06:21:22.655473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:16.269 [2024-07-13 06:21:22.666221] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e99d8 00:26:16.269 [2024-07-13 06:21:22.667595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.269 [2024-07-13 06:21:22.667631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:16.269 [2024-07-13 06:21:22.678630] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e99d8 00:26:16.269 [2024-07-13 06:21:22.680165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.269 [2024-07-13 06:21:22.680199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:16.269 [2024-07-13 06:21:22.691055] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e99d8 00:26:16.269 [2024-07-13 06:21:22.692556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.269 [2024-07-13 06:21:22.692591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:16.269 [2024-07-13 06:21:22.703412] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f3e60 00:26:16.269 [2024-07-13 06:21:22.704893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 
nsid:1 lba:3254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.269 [2024-07-13 06:21:22.704939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.269 [2024-07-13 06:21:22.715632] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e84c0 00:26:16.269 [2024-07-13 06:21:22.717553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:21721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.269 [2024-07-13 06:21:22.717587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.269 [2024-07-13 06:21:22.726788] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e6738 00:26:16.269 [2024-07-13 06:21:22.728032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:2337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.269 [2024-07-13 06:21:22.728077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:16.269 [2024-07-13 06:21:22.739120] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e6738 00:26:16.269 [2024-07-13 06:21:22.740381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:14416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.269 [2024-07-13 06:21:22.740415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:16.269 [2024-07-13 06:21:22.751330] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e6738 00:26:16.269 [2024-07-13 06:21:22.752612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.269 [2024-07-13 06:21:22.752646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:16.269 [2024-07-13 06:21:22.764154] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e84c0 00:26:16.269 [2024-07-13 06:21:22.765355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.269 [2024-07-13 06:21:22.765389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:16.269 [2024-07-13 06:21:22.776484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f7970 00:26:16.269 [2024-07-13 06:21:22.777781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.269 [2024-07-13 06:21:22.777815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:16.526 [2024-07-13 06:21:22.789037] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f3e60 00:26:16.526 [2024-07-13 06:21:22.790283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:30 nsid:1 lba:13170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.526 [2024-07-13 06:21:22.790325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:16.526 [2024-07-13 06:21:22.801709] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e1b48 00:26:16.526 [2024-07-13 06:21:22.802649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.526 [2024-07-13 06:21:22.802687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:16.526 [2024-07-13 06:21:22.814275] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e1b48 00:26:16.526 [2024-07-13 06:21:22.815532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.526 [2024-07-13 06:21:22.815566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:16.526 [2024-07-13 06:21:22.826658] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e1b48 00:26:16.526 [2024-07-13 06:21:22.827899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.526 [2024-07-13 06:21:22.827948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:16.526 [2024-07-13 06:21:22.839139] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e1b48 00:26:16.526 [2024-07-13 06:21:22.840409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.526 [2024-07-13 06:21:22.840443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:16.526 [2024-07-13 06:21:22.851567] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e1b48 00:26:16.526 [2024-07-13 06:21:22.852897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.526 [2024-07-13 06:21:22.852947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:16.526 [2024-07-13 06:21:22.864088] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e1b48 00:26:16.526 [2024-07-13 06:21:22.865398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.526 [2024-07-13 06:21:22.865432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:16.526 [2024-07-13 06:21:22.876561] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e1b48 00:26:16.526 [2024-07-13 06:21:22.877894] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.526 [2024-07-13 06:21:22.877946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:16.526 [2024-07-13 06:21:22.889090] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e1b48 00:26:16.526 [2024-07-13 06:21:22.890414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.526 [2024-07-13 06:21:22.890448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:16.526 [2024-07-13 06:21:22.901520] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e1b48 00:26:16.526 [2024-07-13 06:21:22.902915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.526 [2024-07-13 06:21:22.902961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:16.526 [2024-07-13 06:21:22.914087] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e1b48 00:26:16.526 [2024-07-13 06:21:22.915461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.526 [2024-07-13 06:21:22.915495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:16.526 [2024-07-13 06:21:22.926466] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e1b48 00:26:16.526 [2024-07-13 06:21:22.927862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.526 [2024-07-13 06:21:22.927905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:16.526 [2024-07-13 06:21:22.939004] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e1b48 00:26:16.526 [2024-07-13 06:21:22.940408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.526 [2024-07-13 06:21:22.940442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:16.526 [2024-07-13 06:21:22.951449] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e1b48 00:26:16.526 [2024-07-13 06:21:22.952837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.526 [2024-07-13 06:21:22.952880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:16.526 [2024-07-13 06:21:22.963938] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e1b48 00:26:16.526 [2024-07-13 06:21:22.965380] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.526 [2024-07-13 06:21:22.965415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:16.526 [2024-07-13 06:21:22.976330] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e1b48 00:26:16.526 [2024-07-13 06:21:22.977764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.526 [2024-07-13 06:21:22.977798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:16.526 [2024-07-13 06:21:22.988703] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e1b48 00:26:16.526 [2024-07-13 06:21:22.990210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.526 [2024-07-13 06:21:22.990244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:16.526 [2024-07-13 06:21:23.001291] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e1b48 00:26:16.526 [2024-07-13 06:21:23.002760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.526 [2024-07-13 06:21:23.002794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:16.526 [2024-07-13 06:21:23.013658] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e1b48 00:26:16.526 [2024-07-13 06:21:23.015162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.526 [2024-07-13 06:21:23.015209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:16.526 [2024-07-13 06:21:23.026157] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e6b70 00:26:16.526 [2024-07-13 06:21:23.027686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.526 [2024-07-13 06:21:23.027721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:16.784 [2024-07-13 06:21:23.038886] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f8a50 00:26:16.784 [2024-07-13 06:21:23.040402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.784 [2024-07-13 06:21:23.040437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:16.784 [2024-07-13 06:21:23.051279] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f8e88 00:26:16.784 [2024-07-13 06:21:23.052823] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.784 [2024-07-13 06:21:23.052857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:16.784 [2024-07-13 06:21:23.063603] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f9b30 00:26:16.784 [2024-07-13 06:21:23.065199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.784 [2024-07-13 06:21:23.065233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:16.784 [2024-07-13 06:21:23.075999] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e5658 00:26:16.784 [2024-07-13 06:21:23.077627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.784 [2024-07-13 06:21:23.077661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:16.784 [2024-07-13 06:21:23.088341] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e73e0 00:26:16.784 [2024-07-13 06:21:23.089996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.784 [2024-07-13 06:21:23.090024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.784 [2024-07-13 06:21:23.100625] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190fa3a0 00:26:16.784 [2024-07-13 06:21:23.102234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.784 [2024-07-13 06:21:23.102269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.784 [2024-07-13 06:21:23.112353] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e3d08 00:26:16.784 [2024-07-13 06:21:23.113980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.784 [2024-07-13 06:21:23.114015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:16.784 [2024-07-13 06:21:23.125406] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f9f68 00:26:16.784 [2024-07-13 06:21:23.126590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.784 [2024-07-13 06:21:23.126627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:16.784 [2024-07-13 06:21:23.137965] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f96f8 00:26:16.784 [2024-07-13 
06:21:23.139380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.784 [2024-07-13 06:21:23.139414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:16.784 [2024-07-13 06:21:23.150309] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e2c28 00:26:16.784 [2024-07-13 06:21:23.151757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.784 [2024-07-13 06:21:23.151791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:16.784 [2024-07-13 06:21:23.162772] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190fb048 00:26:16.784 [2024-07-13 06:21:23.164237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.784 [2024-07-13 06:21:23.164271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:16.784 [2024-07-13 06:21:23.175329] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f20d8 00:26:16.784 [2024-07-13 06:21:23.176781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.784 [2024-07-13 06:21:23.176816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:16.784 [2024-07-13 06:21:23.187657] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e27f0 00:26:16.784 [2024-07-13 06:21:23.189144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.784 [2024-07-13 06:21:23.189174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:16.784 [2024-07-13 06:21:23.200118] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e27f0 00:26:16.784 [2024-07-13 06:21:23.201616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.784 [2024-07-13 06:21:23.201656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:16.784 [2024-07-13 06:21:23.212473] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f20d8 00:26:16.785 [2024-07-13 06:21:23.214029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.785 [2024-07-13 06:21:23.214063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:16.785 [2024-07-13 06:21:23.224926] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e4de8 
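Every failure in the stretch above is a pair of records: a data_crc32_calc_done digest error on the TCP qpair, followed by the matching WRITE completion reported as COMMAND TRANSIENT TRANSPORT ERROR (00/22). As a reader-side aid only (not part of host/digest.sh), the two counts can be tallied from a captured console log with a couple of fixed-string greps; "console.log" is an assumed name for a capture of output like this stream.

#!/usr/bin/env bash
# Hypothetical helper (assumed, not from the test suite): tally injected data-digest
# failures and the resulting transient transport error completions in a saved log.
log="${1:-console.log}"   # assumed path to a captured console log like the stream above
digest=$(grep -Fc 'data_crc32_calc_done: *ERROR*: Data digest error' "$log")
transient=$(grep -Fc 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' "$log")
echo "data digest errors             : $digest"
echo "transient transport completions: $transient"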
00:26:16.785 [2024-07-13 06:21:23.226470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.785 [2024-07-13 06:21:23.226504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:16.785 [2024-07-13 06:21:23.237393] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e6fa8 00:26:16.785 [2024-07-13 06:21:23.238932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.785 [2024-07-13 06:21:23.238966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.785 [2024-07-13 06:21:23.248194] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190ea680 00:26:16.785 [2024-07-13 06:21:23.249196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:14023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.785 [2024-07-13 06:21:23.249230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:16.785 [2024-07-13 06:21:23.260663] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190ee5c8 00:26:16.785 [2024-07-13 06:21:23.261681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.785 [2024-07-13 06:21:23.261716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:16.785 [2024-07-13 06:21:23.272996] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190ee5c8 00:26:16.785 [2024-07-13 06:21:23.274057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.785 [2024-07-13 06:21:23.274100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:16.785 [2024-07-13 06:21:23.285431] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190ea680 00:26:16.785 [2024-07-13 06:21:23.286501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.785 [2024-07-13 06:21:23.286535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:17.043 [2024-07-13 06:21:23.298104] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190ebfd0 00:26:17.043 [2024-07-13 06:21:23.299211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.043 [2024-07-13 06:21:23.299246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:17.043 [2024-07-13 06:21:23.310639] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) 
with pdu=0x2000190f31b8 00:26:17.043 [2024-07-13 06:21:23.311755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.043 [2024-07-13 06:21:23.311789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:17.043 [2024-07-13 06:21:23.323103] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f31b8 00:26:17.043 [2024-07-13 06:21:23.324207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.043 [2024-07-13 06:21:23.324241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:17.043 [2024-07-13 06:21:23.335564] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f31b8 00:26:17.043 [2024-07-13 06:21:23.336703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:11715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.043 [2024-07-13 06:21:23.336737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:17.043 [2024-07-13 06:21:23.348042] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f31b8 00:26:17.043 [2024-07-13 06:21:23.349216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:20413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.043 [2024-07-13 06:21:23.349244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:17.043 [2024-07-13 06:21:23.360495] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f31b8 00:26:17.043 [2024-07-13 06:21:23.361704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:14451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.043 [2024-07-13 06:21:23.361738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:17.043 [2024-07-13 06:21:23.372936] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f31b8 00:26:17.043 [2024-07-13 06:21:23.374144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.043 [2024-07-13 06:21:23.374173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:17.043 [2024-07-13 06:21:23.385433] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f31b8 00:26:17.043 [2024-07-13 06:21:23.386648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:5073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.043 [2024-07-13 06:21:23.386681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:17.043 [2024-07-13 06:21:23.397800] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x85d1a0) with pdu=0x2000190f31b8 00:26:17.043 [2024-07-13 06:21:23.399037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.043 [2024-07-13 06:21:23.399067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:17.043 [2024-07-13 06:21:23.410154] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f31b8 00:26:17.043 [2024-07-13 06:21:23.411358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.043 [2024-07-13 06:21:23.411392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:17.043 [2024-07-13 06:21:23.422622] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f31b8 00:26:17.043 [2024-07-13 06:21:23.423939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.043 [2024-07-13 06:21:23.423968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:17.043 [2024-07-13 06:21:23.435117] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f31b8 00:26:17.043 [2024-07-13 06:21:23.436368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.043 [2024-07-13 06:21:23.436401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:17.043 [2024-07-13 06:21:23.447451] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f31b8 00:26:17.043 [2024-07-13 06:21:23.448705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.043 [2024-07-13 06:21:23.448739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:17.043 [2024-07-13 06:21:23.459846] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f31b8 00:26:17.043 [2024-07-13 06:21:23.461152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.043 [2024-07-13 06:21:23.461197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:17.043 [2024-07-13 06:21:23.472972] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f92c0 00:26:17.043 [2024-07-13 06:21:23.475390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.043 [2024-07-13 06:21:23.475448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:17.043 [2024-07-13 06:21:23.486165] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x85d1a0) with pdu=0x2000190eb760 00:26:17.043 [2024-07-13 06:21:23.487984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.043 [2024-07-13 06:21:23.488028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:17.043 [2024-07-13 06:21:23.498658] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e2c28 00:26:17.043 [2024-07-13 06:21:23.499948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.043 [2024-07-13 06:21:23.499985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:17.043 [2024-07-13 06:21:23.509832] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190f46d0 00:26:17.043 [2024-07-13 06:21:23.511861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.043 [2024-07-13 06:21:23.511927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:17.043 [2024-07-13 06:21:23.521177] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e4de8 00:26:17.043 [2024-07-13 06:21:23.522501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.043 [2024-07-13 06:21:23.522549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:17.043 [2024-07-13 06:21:23.535942] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190e6738 00:26:17.043 [2024-07-13 06:21:23.536853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.043 [2024-07-13 06:21:23.536895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:17.043 [2024-07-13 06:21:23.546533] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190ebfd0 00:26:17.043 [2024-07-13 06:21:23.548368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.043 [2024-07-13 06:21:23.548403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:17.300 [2024-07-13 06:21:23.558587] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190fbcf0 00:26:17.300 [2024-07-13 06:21:23.559790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.300 [2024-07-13 06:21:23.559825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:17.300 [2024-07-13 06:21:23.570994] tcp.c:2034:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190df118 00:26:17.300 [2024-07-13 06:21:23.572170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.300 [2024-07-13 06:21:23.572212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:17.300 [2024-07-13 06:21:23.583208] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x85d1a0) with pdu=0x2000190de038 00:26:17.300 [2024-07-13 06:21:23.584429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:17.300 [2024-07-13 06:21:23.584472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:17.300 00:26:17.301 Latency(us) 00:26:17.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.301 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:17.301 nvme0n1 : 2.00 20588.64 80.42 0.00 0.00 6208.93 2779.21 14951.92 00:26:17.301 =================================================================================================================== 00:26:17.301 Total : 20588.64 80.42 0.00 0.00 6208.93 2779.21 14951.92 00:26:17.301 0 00:26:17.301 06:21:23 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:17.301 06:21:23 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:17.301 06:21:23 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:17.301 | .driver_specific 00:26:17.301 | .nvme_error 00:26:17.301 | .status_code 00:26:17.301 | .command_transient_transport_error' 00:26:17.301 06:21:23 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:17.558 06:21:23 -- host/digest.sh@71 -- # (( 161 > 0 )) 00:26:17.558 06:21:23 -- host/digest.sh@73 -- # killprocess 1228740 00:26:17.558 06:21:23 -- common/autotest_common.sh@926 -- # '[' -z 1228740 ']' 00:26:17.558 06:21:23 -- common/autotest_common.sh@930 -- # kill -0 1228740 00:26:17.558 06:21:23 -- common/autotest_common.sh@931 -- # uname 00:26:17.558 06:21:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:17.558 06:21:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1228740 00:26:17.558 06:21:23 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:17.558 06:21:23 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:17.558 06:21:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1228740' 00:26:17.558 killing process with pid 1228740 00:26:17.558 06:21:23 -- common/autotest_common.sh@945 -- # kill 1228740 00:26:17.558 Received shutdown signal, test time was about 2.000000 seconds 00:26:17.558 00:26:17.558 Latency(us) 00:26:17.558 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.558 =================================================================================================================== 00:26:17.558 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:17.558 06:21:23 -- common/autotest_common.sh@950 -- # wait 1228740 00:26:17.816 06:21:24 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:26:17.816 06:21:24 -- host/digest.sh@54 -- # local rw bs qd 00:26:17.816 06:21:24 -- host/digest.sh@56 -- # 
rw=randwrite 00:26:17.816 06:21:24 -- host/digest.sh@56 -- # bs=131072 00:26:17.816 06:21:24 -- host/digest.sh@56 -- # qd=16 00:26:17.816 06:21:24 -- host/digest.sh@58 -- # bperfpid=1229291 00:26:17.816 06:21:24 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:17.816 06:21:24 -- host/digest.sh@60 -- # waitforlisten 1229291 /var/tmp/bperf.sock 00:26:17.816 06:21:24 -- common/autotest_common.sh@819 -- # '[' -z 1229291 ']' 00:26:17.816 06:21:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:17.816 06:21:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:17.816 06:21:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:17.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:17.816 06:21:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:17.816 06:21:24 -- common/autotest_common.sh@10 -- # set +x 00:26:17.816 [2024-07-13 06:21:24.206425] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:17.816 [2024-07-13 06:21:24.206508] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1229291 ] 00:26:17.816 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:17.816 Zero copy mechanism will not be used. 00:26:17.816 EAL: No free 2048 kB hugepages reported on node 1 00:26:17.816 [2024-07-13 06:21:24.263727] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.073 [2024-07-13 06:21:24.369509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:19.005 06:21:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:19.005 06:21:25 -- common/autotest_common.sh@852 -- # return 0 00:26:19.005 06:21:25 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:19.005 06:21:25 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:19.005 06:21:25 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:19.005 06:21:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:19.005 06:21:25 -- common/autotest_common.sh@10 -- # set +x 00:26:19.005 06:21:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:19.005 06:21:25 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:19.005 06:21:25 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:19.571 nvme0n1 00:26:19.571 06:21:25 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:19.571 06:21:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:19.571 06:21:25 -- common/autotest_common.sh@10 -- # set +x 00:26:19.571 06:21:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:19.571 06:21:25 -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:19.571 06:21:25 -- 
host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:19.571 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:19.571 Zero copy mechanism will not be used. 00:26:19.571 Running I/O for 2 seconds... 00:26:19.571 [2024-07-13 06:21:25.903583] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:19.571 [2024-07-13 06:21:25.903770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.571 [2024-07-13 06:21:25.903810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.571 [2024-07-13 06:21:25.912681] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:19.571 [2024-07-13 06:21:25.912936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.571 [2024-07-13 06:21:25.912966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.571 [2024-07-13 06:21:25.923222] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:19.571 [2024-07-13 06:21:25.923521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.571 [2024-07-13 06:21:25.923554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.571 [2024-07-13 06:21:25.934112] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:19.571 [2024-07-13 06:21:25.934427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.571 [2024-07-13 06:21:25.934460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.572 [2024-07-13 06:21:25.944097] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:19.572 [2024-07-13 06:21:25.944436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.572 [2024-07-13 06:21:25.944468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.572 [2024-07-13 06:21:25.954724] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:19.572 [2024-07-13 06:21:25.955051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.572 [2024-07-13 06:21:25.955080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.572 [2024-07-13 06:21:25.966009] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:19.572 
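The trace above has already set up the next pass: bdevperf is started in wait-for-RPC mode on /var/tmp/bperf.sock with 131072-byte random writes at queue depth 16, NVMe error counters and unlimited retries are enabled, the controller is attached with --ddgst over TCP, crc32c corruption is injected through the suite's rpc_cmd helper, and perform_tests drives the I/O. The sketch below condenses that sequence under stated assumptions: SPDK_ROOT standing in for the checked-out tree, the default RPC socket assumed to belong to the NVMe-oF target, and a crude socket wait in place of the suite's waitforlisten; the individual RPC flags are exactly as traced.

#!/usr/bin/env bash
# Condensed sketch of the randwrite/131072/16 pass traced above (assumptions noted
# in the comments; RPC names and flags are taken verbatim from the trace).
SPDK_ROOT=${SPDK_ROOT:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
RPC="$SPDK_ROOT/scripts/rpc.py"
BPERF_SOCK=/var/tmp/bperf.sock

# bdevperf in wait-for-RPC mode (-z): 128 KiB random writes, queue depth 16, 2 seconds.
"$SPDK_ROOT/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
    -w randwrite -o 131072 -t 2 -q 16 -z &
until [ -S "$BPERF_SOCK" ]; do sleep 0.2; done   # crude stand-in for waitforlisten

# Keep per-controller NVMe error counters and retry failed I/O indefinitely.
"$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the subsystem with data digest enabled (--ddgst) over TCP, as traced.
"$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt crc32c results (flags as traced); rpc_cmd issues this against the default
# RPC socket, assumed here to be owned by the NVMe-oF target application.
"$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32

# Drive the workload, then read how many completions were transient transport errors.
"$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
"$RPC" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'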
[2024-07-13 06:21:25.966289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.572 [2024-07-13 06:21:25.966321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.572 [2024-07-13 06:21:25.976121] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:19.572 [2024-07-13 06:21:25.976486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.572 [2024-07-13 06:21:25.976518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.572 [2024-07-13 06:21:25.986012] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:19.572 [2024-07-13 06:21:25.986374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.572 [2024-07-13 06:21:25.986406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.572 [2024-07-13 06:21:25.996435] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:19.572 [2024-07-13 06:21:25.996686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.572 [2024-07-13 06:21:25.996717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.572 [2024-07-13 06:21:26.006593] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:19.572 [2024-07-13 06:21:26.006900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.572 [2024-07-13 06:21:26.006947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.572 [2024-07-13 06:21:26.016494] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:19.572 [2024-07-13 06:21:26.016757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.572 [2024-07-13 06:21:26.016789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.572 [2024-07-13 06:21:26.026388] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:19.572 [2024-07-13 06:21:26.026606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.572 [2024-07-13 06:21:26.026634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.572 [2024-07-13 06:21:26.035806] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with 
pdu=0x2000190fef90 00:26:19.572 [2024-07-13 06:21:26.036084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.572 [2024-07-13 06:21:26.036112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.572 [2024-07-13 06:21:26.045536] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:19.572 [2024-07-13 06:21:26.045786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.572 [2024-07-13 06:21:26.045815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.572 [2024-07-13 06:21:26.054861] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:19.572 [2024-07-13 06:21:26.055204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.572 [2024-07-13 06:21:26.055233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.572 [2024-07-13 06:21:26.064473] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:19.572 [2024-07-13 06:21:26.064707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.572 [2024-07-13 06:21:26.064740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.572 [2024-07-13 06:21:26.073131] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:19.572 [2024-07-13 06:21:26.073472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.572 [2024-07-13 06:21:26.073501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.830 [2024-07-13 06:21:26.083750] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:19.830 [2024-07-13 06:21:26.084070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.830 [2024-07-13 06:21:26.084098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.830 [2024-07-13 06:21:26.093431] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:19.830 [2024-07-13 06:21:26.093639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.830 [2024-07-13 06:21:26.093666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.830 [2024-07-13 06:21:26.102471] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:19.831 [2024-07-13 06:21:26.102808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.831 [2024-07-13 06:21:26.102836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.831 [2024-07-13 06:21:26.112101] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:19.831 [2024-07-13 06:21:26.112351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.831 [2024-07-13 06:21:26.112379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.831 [2024-07-13 06:21:26.121331] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:19.831 [2024-07-13 06:21:26.121677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.831 [2024-07-13 06:21:26.121705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.831 [2024-07-13 06:21:26.130777] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:19.831 [2024-07-13 06:21:26.131053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.831 [2024-07-13 06:21:26.131082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.831 [2024-07-13 06:21:26.139725] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:19.831 [2024-07-13 06:21:26.140166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.831 [2024-07-13 06:21:26.140194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.831 [2024-07-13 06:21:26.149520] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:19.831 [2024-07-13 06:21:26.149791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.831 [2024-07-13 06:21:26.149819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.831 [2024-07-13 06:21:26.157872] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:19.831 [2024-07-13 06:21:26.158069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.831 [2024-07-13 06:21:26.158097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.831 [2024-07-13 06:21:26.167253] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90
[repeated record sequence, app timestamps 2024-07-13 06:21:26.167526 through 06:21:27.535641, console timestamps 00:26:19.831 through 00:26:21.127: tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90, each followed by nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 with varying lba, and nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 p:0 m:0 dnr:0 with sqhd cycling 0001/0021/0041/0061]
[2024-07-13 06:21:27.535929]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.127 [2024-07-13 06:21:27.535958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.127 [2024-07-13 06:21:27.544885] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.127 [2024-07-13 06:21:27.545195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.127 [2024-07-13 06:21:27.545223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.127 [2024-07-13 06:21:27.554656] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.127 [2024-07-13 06:21:27.554892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.127 [2024-07-13 06:21:27.554919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.127 [2024-07-13 06:21:27.564728] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.127 [2024-07-13 06:21:27.564928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.127 [2024-07-13 06:21:27.564956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.127 [2024-07-13 06:21:27.572746] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.127 [2024-07-13 06:21:27.573071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.127 [2024-07-13 06:21:27.573099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.127 [2024-07-13 06:21:27.582684] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.127 [2024-07-13 06:21:27.582972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.127 [2024-07-13 06:21:27.583000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.127 [2024-07-13 06:21:27.592324] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.127 [2024-07-13 06:21:27.592521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.127 [2024-07-13 06:21:27.592549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.127 [2024-07-13 06:21:27.602275] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 
00:26:21.127 [2024-07-13 06:21:27.602627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.127 [2024-07-13 06:21:27.602656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.127 [2024-07-13 06:21:27.612040] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.127 [2024-07-13 06:21:27.612355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.127 [2024-07-13 06:21:27.612382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.127 [2024-07-13 06:21:27.622079] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.128 [2024-07-13 06:21:27.622355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.128 [2024-07-13 06:21:27.622383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.128 [2024-07-13 06:21:27.629887] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.128 [2024-07-13 06:21:27.630075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.128 [2024-07-13 06:21:27.630103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.386 [2024-07-13 06:21:27.639399] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.386 [2024-07-13 06:21:27.639594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.386 [2024-07-13 06:21:27.639622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.386 [2024-07-13 06:21:27.648147] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.386 [2024-07-13 06:21:27.648277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.386 [2024-07-13 06:21:27.648304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.386 [2024-07-13 06:21:27.657125] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.386 [2024-07-13 06:21:27.657445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.386 [2024-07-13 06:21:27.657473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.386 [2024-07-13 06:21:27.666465] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.386 [2024-07-13 06:21:27.666790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.386 [2024-07-13 06:21:27.666818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.386 [2024-07-13 06:21:27.675835] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.386 [2024-07-13 06:21:27.676146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.386 [2024-07-13 06:21:27.676175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.386 [2024-07-13 06:21:27.685246] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.386 [2024-07-13 06:21:27.685482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.386 [2024-07-13 06:21:27.685509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.386 [2024-07-13 06:21:27.694226] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.386 [2024-07-13 06:21:27.694421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.386 [2024-07-13 06:21:27.694453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.386 [2024-07-13 06:21:27.704041] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.386 [2024-07-13 06:21:27.704308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.386 [2024-07-13 06:21:27.704336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.386 [2024-07-13 06:21:27.713784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.386 [2024-07-13 06:21:27.714109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.386 [2024-07-13 06:21:27.714138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.386 [2024-07-13 06:21:27.721784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.386 [2024-07-13 06:21:27.722057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.386 [2024-07-13 06:21:27.722085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.386 [2024-07-13 06:21:27.731351] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.386 [2024-07-13 06:21:27.731614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.386 [2024-07-13 06:21:27.731641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.386 [2024-07-13 06:21:27.741034] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.386 [2024-07-13 06:21:27.741321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.386 [2024-07-13 06:21:27.741349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.387 [2024-07-13 06:21:27.750776] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.387 [2024-07-13 06:21:27.751082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.387 [2024-07-13 06:21:27.751111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.387 [2024-07-13 06:21:27.760115] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.387 [2024-07-13 06:21:27.760248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.387 [2024-07-13 06:21:27.760276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.387 [2024-07-13 06:21:27.769420] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.387 [2024-07-13 06:21:27.769621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.387 [2024-07-13 06:21:27.769649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.387 [2024-07-13 06:21:27.778180] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.387 [2024-07-13 06:21:27.778390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.387 [2024-07-13 06:21:27.778418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.387 [2024-07-13 06:21:27.787726] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.387 [2024-07-13 06:21:27.788053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.387 [2024-07-13 06:21:27.788081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
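This long run of digest-test entries repeats one three-part pattern per error: tcp.c reports a failed data-digest (CRC32C) check on the received PDU, nvme_qpair.c prints the affected WRITE, and that command then completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), i.e. generic status 0x22, instead of being treated as silent data corruption. A rough way to tally those completions from a saved copy of this console output (the file name here is only an example):

  grep -o 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' console.log | wc -l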
00:26:21.387 [2024-07-13 06:21:27.797165] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.387 [2024-07-13 06:21:27.797491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.387 [2024-07-13 06:21:27.797518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.387 [2024-07-13 06:21:27.806741] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.387 [2024-07-13 06:21:27.806942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.387 [2024-07-13 06:21:27.806970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.387 [2024-07-13 06:21:27.816314] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.387 [2024-07-13 06:21:27.816543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.387 [2024-07-13 06:21:27.816571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.387 [2024-07-13 06:21:27.825729] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.387 [2024-07-13 06:21:27.825972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.387 [2024-07-13 06:21:27.826000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.387 [2024-07-13 06:21:27.835132] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.387 [2024-07-13 06:21:27.835327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.387 [2024-07-13 06:21:27.835355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.387 [2024-07-13 06:21:27.844551] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.387 [2024-07-13 06:21:27.844834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.387 [2024-07-13 06:21:27.844862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.387 [2024-07-13 06:21:27.854349] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.387 [2024-07-13 06:21:27.854621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.387 [2024-07-13 06:21:27.854649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.387 [2024-07-13 06:21:27.863864] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.387 [2024-07-13 06:21:27.864208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.387 [2024-07-13 06:21:27.864235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.387 [2024-07-13 06:21:27.873610] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.387 [2024-07-13 06:21:27.873850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.387 [2024-07-13 06:21:27.873886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.387 [2024-07-13 06:21:27.883026] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.387 [2024-07-13 06:21:27.883222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.387 [2024-07-13 06:21:27.883250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.387 [2024-07-13 06:21:27.890674] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x6cb210) with pdu=0x2000190fef90 00:26:21.387 [2024-07-13 06:21:27.890849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.387 [2024-07-13 06:21:27.890888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.644 00:26:21.644 Latency(us) 00:26:21.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.644 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:21.644 nvme0n1 : 2.00 3275.51 409.44 0.00 0.00 4873.62 3058.35 10874.12 00:26:21.644 =================================================================================================================== 00:26:21.644 Total : 3275.51 409.44 0.00 0.00 4873.62 3058.35 10874.12 00:26:21.644 0 00:26:21.644 06:21:27 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:21.644 06:21:27 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:21.644 06:21:27 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:21.644 | .driver_specific 00:26:21.644 | .nvme_error 00:26:21.644 | .status_code 00:26:21.644 | .command_transient_transport_error' 00:26:21.644 06:21:27 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:21.901 06:21:28 -- host/digest.sh@71 -- # (( 211 > 0 )) 00:26:21.901 06:21:28 -- host/digest.sh@73 -- # killprocess 1229291 00:26:21.901 06:21:28 -- common/autotest_common.sh@926 -- # '[' -z 1229291 ']' 00:26:21.901 06:21:28 -- common/autotest_common.sh@930 -- # kill -0 1229291 00:26:21.901 06:21:28 -- common/autotest_common.sh@931 -- # uname 00:26:21.901 06:21:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux 
']' 00:26:21.901 06:21:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1229291 00:26:21.901 06:21:28 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:21.901 06:21:28 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:21.901 06:21:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1229291' 00:26:21.901 killing process with pid 1229291 00:26:21.901 06:21:28 -- common/autotest_common.sh@945 -- # kill 1229291 00:26:21.902 Received shutdown signal, test time was about 2.000000 seconds 00:26:21.902 00:26:21.902 Latency(us) 00:26:21.902 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.902 =================================================================================================================== 00:26:21.902 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:21.902 06:21:28 -- common/autotest_common.sh@950 -- # wait 1229291 00:26:22.159 06:21:28 -- host/digest.sh@115 -- # killprocess 1227498 00:26:22.159 06:21:28 -- common/autotest_common.sh@926 -- # '[' -z 1227498 ']' 00:26:22.159 06:21:28 -- common/autotest_common.sh@930 -- # kill -0 1227498 00:26:22.159 06:21:28 -- common/autotest_common.sh@931 -- # uname 00:26:22.159 06:21:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:22.159 06:21:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1227498 00:26:22.159 06:21:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:22.159 06:21:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:22.159 06:21:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1227498' 00:26:22.159 killing process with pid 1227498 00:26:22.159 06:21:28 -- common/autotest_common.sh@945 -- # kill 1227498 00:26:22.159 06:21:28 -- common/autotest_common.sh@950 -- # wait 1227498 00:26:22.418 00:26:22.418 real 0m18.690s 00:26:22.418 user 0m37.262s 00:26:22.418 sys 0m4.286s 00:26:22.418 06:21:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:22.418 06:21:28 -- common/autotest_common.sh@10 -- # set +x 00:26:22.418 ************************************ 00:26:22.418 END TEST nvmf_digest_error 00:26:22.418 ************************************ 00:26:22.418 06:21:28 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:26:22.418 06:21:28 -- host/digest.sh@139 -- # nvmftestfini 00:26:22.418 06:21:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:22.418 06:21:28 -- nvmf/common.sh@116 -- # sync 00:26:22.418 06:21:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:22.418 06:21:28 -- nvmf/common.sh@119 -- # set +e 00:26:22.418 06:21:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:22.418 06:21:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:22.418 rmmod nvme_tcp 00:26:22.418 rmmod nvme_fabrics 00:26:22.418 rmmod nvme_keyring 00:26:22.418 06:21:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:22.418 06:21:28 -- nvmf/common.sh@123 -- # set -e 00:26:22.418 06:21:28 -- nvmf/common.sh@124 -- # return 0 00:26:22.418 06:21:28 -- nvmf/common.sh@477 -- # '[' -n 1227498 ']' 00:26:22.418 06:21:28 -- nvmf/common.sh@478 -- # killprocess 1227498 00:26:22.418 06:21:28 -- common/autotest_common.sh@926 -- # '[' -z 1227498 ']' 00:26:22.418 06:21:28 -- common/autotest_common.sh@930 -- # kill -0 1227498 00:26:22.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1227498) - No such process 00:26:22.418 06:21:28 -- common/autotest_common.sh@953 -- # echo 'Process with pid 
1227498 is not found' 00:26:22.418 Process with pid 1227498 is not found 00:26:22.418 06:21:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:22.418 06:21:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:22.418 06:21:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:22.418 06:21:28 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:22.418 06:21:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:22.418 06:21:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:22.418 06:21:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:22.418 06:21:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:24.953 06:21:30 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:24.953 00:26:24.953 real 0m38.836s 00:26:24.953 user 1m9.509s 00:26:24.953 sys 0m9.967s 00:26:24.953 06:21:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:24.953 06:21:30 -- common/autotest_common.sh@10 -- # set +x 00:26:24.953 ************************************ 00:26:24.953 END TEST nvmf_digest 00:26:24.953 ************************************ 00:26:24.953 06:21:30 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:26:24.953 06:21:30 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:26:24.953 06:21:30 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:26:24.953 06:21:30 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:24.953 06:21:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:24.953 06:21:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:24.953 06:21:30 -- common/autotest_common.sh@10 -- # set +x 00:26:24.953 ************************************ 00:26:24.953 START TEST nvmf_bdevperf 00:26:24.953 ************************************ 00:26:24.953 06:21:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:24.953 * Looking for test storage... 
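Just before killing the bperf process (pid 1229291 above), host/digest.sh@71 (get_transient_errcount, traced above) confirms that the injected digest errors were accounted for as transient transport errors: it queries bdev_get_iostat over the bperf RPC socket, extracts the per-bdev nvme_error counter with jq, and requires a non-zero result ((( 211 > 0 )) in this run). The same check written out as a standalone sketch, reusing the socket path, bdev name and jq filter shown in the trace:

  count=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( count > 0 )) && echo "saw $count transient transport errors"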
00:26:24.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:24.953 06:21:30 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:24.953 06:21:30 -- nvmf/common.sh@7 -- # uname -s 00:26:24.953 06:21:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:24.953 06:21:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:24.953 06:21:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:24.953 06:21:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:24.953 06:21:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:24.953 06:21:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:24.953 06:21:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:24.953 06:21:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:24.953 06:21:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:24.953 06:21:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:24.953 06:21:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:24.953 06:21:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:24.953 06:21:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:24.953 06:21:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:24.953 06:21:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:24.953 06:21:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:24.953 06:21:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:24.953 06:21:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:24.953 06:21:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:24.953 06:21:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.953 06:21:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.953 06:21:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.953 06:21:30 -- paths/export.sh@5 -- # export PATH 00:26:24.953 06:21:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.953 06:21:30 -- nvmf/common.sh@46 -- # : 0 00:26:24.953 06:21:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:24.953 06:21:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:24.953 06:21:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:24.953 06:21:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:24.953 06:21:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:24.953 06:21:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:24.953 06:21:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:24.953 06:21:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:24.953 06:21:30 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:24.953 06:21:30 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:24.953 06:21:30 -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:24.953 06:21:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:24.953 06:21:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:24.953 06:21:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:24.953 06:21:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:24.953 06:21:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:24.953 06:21:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.953 06:21:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:24.953 06:21:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:24.953 06:21:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:24.953 06:21:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:24.953 06:21:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:24.953 06:21:30 -- common/autotest_common.sh@10 -- # set +x 00:26:26.855 06:21:32 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:26.855 06:21:32 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:26.855 06:21:32 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:26.855 06:21:32 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:26.855 06:21:32 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:26.855 06:21:32 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:26.855 06:21:32 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:26.855 06:21:32 -- nvmf/common.sh@294 -- # net_devs=() 00:26:26.855 06:21:32 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:26.855 06:21:32 -- nvmf/common.sh@295 
-- # e810=() 00:26:26.855 06:21:32 -- nvmf/common.sh@295 -- # local -ga e810 00:26:26.855 06:21:32 -- nvmf/common.sh@296 -- # x722=() 00:26:26.855 06:21:32 -- nvmf/common.sh@296 -- # local -ga x722 00:26:26.855 06:21:32 -- nvmf/common.sh@297 -- # mlx=() 00:26:26.855 06:21:32 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:26.855 06:21:32 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:26.855 06:21:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:26.855 06:21:32 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:26.855 06:21:32 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:26.855 06:21:32 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:26.855 06:21:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:26.855 06:21:32 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:26.855 06:21:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:26.855 06:21:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:26.855 06:21:32 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:26.855 06:21:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:26.855 06:21:32 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:26.855 06:21:32 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:26.855 06:21:32 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:26.855 06:21:32 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:26.855 06:21:32 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:26.855 06:21:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:26.855 06:21:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:26.855 06:21:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:26.855 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:26.855 06:21:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:26.855 06:21:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:26.855 06:21:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.855 06:21:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.855 06:21:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:26.855 06:21:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:26.856 06:21:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:26.856 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:26.856 06:21:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:26.856 06:21:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:26.856 06:21:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.856 06:21:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.856 06:21:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:26.856 06:21:32 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:26.856 06:21:32 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:26.856 06:21:32 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:26.856 06:21:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:26.856 06:21:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.856 06:21:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:26.856 06:21:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.856 06:21:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:26.856 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:26:26.856 06:21:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.856 06:21:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:26.856 06:21:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.856 06:21:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:26.856 06:21:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.856 06:21:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:26.856 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:26.856 06:21:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.856 06:21:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:26.856 06:21:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:26.856 06:21:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:26.856 06:21:32 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:26.856 06:21:32 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:26.856 06:21:32 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:26.856 06:21:32 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:26.856 06:21:32 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:26.856 06:21:32 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:26.856 06:21:32 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:26.856 06:21:32 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:26.856 06:21:32 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:26.856 06:21:32 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:26.856 06:21:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:26.856 06:21:32 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:26.856 06:21:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:26.856 06:21:32 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:26.856 06:21:32 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:26.856 06:21:32 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:26.856 06:21:32 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:26.856 06:21:32 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:26.856 06:21:32 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:26.856 06:21:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:26.856 06:21:33 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:26.856 06:21:33 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:26.856 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:26.856 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:26:26.856 00:26:26.856 --- 10.0.0.2 ping statistics --- 00:26:26.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.856 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:26:26.856 06:21:33 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:26.856 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:26.856 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:26:26.856 00:26:26.856 --- 10.0.0.1 ping statistics --- 00:26:26.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.856 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:26:26.856 06:21:33 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:26.856 06:21:33 -- nvmf/common.sh@410 -- # return 0 00:26:26.856 06:21:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:26.856 06:21:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:26.856 06:21:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:26.856 06:21:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:26.856 06:21:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:26.856 06:21:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:26.856 06:21:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:26.856 06:21:33 -- host/bdevperf.sh@25 -- # tgt_init 00:26:26.856 06:21:33 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:26.856 06:21:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:26.856 06:21:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:26.856 06:21:33 -- common/autotest_common.sh@10 -- # set +x 00:26:26.856 06:21:33 -- nvmf/common.sh@469 -- # nvmfpid=1231799 00:26:26.856 06:21:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:26.856 06:21:33 -- nvmf/common.sh@470 -- # waitforlisten 1231799 00:26:26.856 06:21:33 -- common/autotest_common.sh@819 -- # '[' -z 1231799 ']' 00:26:26.856 06:21:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:26.856 06:21:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:26.856 06:21:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:26.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:26.856 06:21:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:26.856 06:21:33 -- common/autotest_common.sh@10 -- # set +x 00:26:26.856 [2024-07-13 06:21:33.089266] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:26.856 [2024-07-13 06:21:33.089357] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:26.856 EAL: No free 2048 kB hugepages reported on node 1 00:26:26.856 [2024-07-13 06:21:33.152533] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:26.856 [2024-07-13 06:21:33.258357] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:26.856 [2024-07-13 06:21:33.258517] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:26.856 [2024-07-13 06:21:33.258533] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:26.856 [2024-07-13 06:21:33.258545] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
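The target side is launched inside the cvl_0_0_ns_spdk namespace with -m 0xE, i.e. core mask binary 1110, which is why the app reports "Total cores available: 3" and the reactor notices that follow list cores 1-3; the bdevperf initiator later starts with -c 0x1 on core 0, so the two sides do not share cores. A throwaway sketch for decoding such a mask (any mask value works the same way):

  mask=0xE
  for cpu in $(seq 0 31); do
    (( (mask >> cpu) & 1 )) && echo "reactor expected on core $cpu"
  done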
00:26:26.856 [2024-07-13 06:21:33.258638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:26.856 [2024-07-13 06:21:33.258684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:26.856 [2024-07-13 06:21:33.258686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:27.789 06:21:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:27.789 06:21:34 -- common/autotest_common.sh@852 -- # return 0 00:26:27.789 06:21:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:27.789 06:21:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:27.789 06:21:34 -- common/autotest_common.sh@10 -- # set +x 00:26:27.789 06:21:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:27.789 06:21:34 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:27.789 06:21:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:27.789 06:21:34 -- common/autotest_common.sh@10 -- # set +x 00:26:27.789 [2024-07-13 06:21:34.032272] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:27.789 06:21:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:27.789 06:21:34 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:27.789 06:21:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:27.789 06:21:34 -- common/autotest_common.sh@10 -- # set +x 00:26:27.789 Malloc0 00:26:27.789 06:21:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:27.789 06:21:34 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:27.789 06:21:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:27.789 06:21:34 -- common/autotest_common.sh@10 -- # set +x 00:26:27.789 06:21:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:27.789 06:21:34 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:27.789 06:21:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:27.789 06:21:34 -- common/autotest_common.sh@10 -- # set +x 00:26:27.789 06:21:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:27.789 06:21:34 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:27.789 06:21:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:27.789 06:21:34 -- common/autotest_common.sh@10 -- # set +x 00:26:27.789 [2024-07-13 06:21:34.100553] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:27.789 06:21:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:27.789 06:21:34 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:27.789 06:21:34 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:27.789 06:21:34 -- nvmf/common.sh@520 -- # config=() 00:26:27.789 06:21:34 -- nvmf/common.sh@520 -- # local subsystem config 00:26:27.789 06:21:34 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:27.789 06:21:34 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:27.789 { 00:26:27.789 "params": { 00:26:27.789 "name": "Nvme$subsystem", 00:26:27.789 "trtype": "$TEST_TRANSPORT", 00:26:27.789 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:27.789 "adrfam": "ipv4", 00:26:27.789 "trsvcid": "$NVMF_PORT", 00:26:27.789 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:27.789 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:27.789 "hdgst": ${hdgst:-false}, 00:26:27.789 "ddgst": ${ddgst:-false} 00:26:27.789 }, 00:26:27.789 "method": "bdev_nvme_attach_controller" 00:26:27.789 } 00:26:27.789 EOF 00:26:27.789 )") 00:26:27.789 06:21:34 -- nvmf/common.sh@542 -- # cat 00:26:27.789 06:21:34 -- nvmf/common.sh@544 -- # jq . 00:26:27.789 06:21:34 -- nvmf/common.sh@545 -- # IFS=, 00:26:27.789 06:21:34 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:27.789 "params": { 00:26:27.789 "name": "Nvme1", 00:26:27.789 "trtype": "tcp", 00:26:27.789 "traddr": "10.0.0.2", 00:26:27.789 "adrfam": "ipv4", 00:26:27.789 "trsvcid": "4420", 00:26:27.789 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:27.789 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:27.789 "hdgst": false, 00:26:27.789 "ddgst": false 00:26:27.789 }, 00:26:27.789 "method": "bdev_nvme_attach_controller" 00:26:27.789 }' 00:26:27.789 [2024-07-13 06:21:34.143250] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:27.789 [2024-07-13 06:21:34.143330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1231961 ] 00:26:27.789 EAL: No free 2048 kB hugepages reported on node 1 00:26:27.789 [2024-07-13 06:21:34.202706] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.048 [2024-07-13 06:21:34.314228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.048 Running I/O for 1 seconds... 00:26:29.420 00:26:29.420 Latency(us) 00:26:29.420 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.420 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:29.420 Verification LBA range: start 0x0 length 0x4000 00:26:29.420 Nvme1n1 : 1.01 12970.49 50.67 0.00 0.00 9820.99 1462.42 17087.91 00:26:29.420 =================================================================================================================== 00:26:29.420 Total : 12970.49 50.67 0.00 0.00 9820.99 1462.42 17087.91 00:26:29.420 06:21:35 -- host/bdevperf.sh@30 -- # bdevperfpid=1232101 00:26:29.420 06:21:35 -- host/bdevperf.sh@32 -- # sleep 3 00:26:29.420 06:21:35 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:29.420 06:21:35 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:29.420 06:21:35 -- nvmf/common.sh@520 -- # config=() 00:26:29.420 06:21:35 -- nvmf/common.sh@520 -- # local subsystem config 00:26:29.420 06:21:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:29.420 06:21:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:29.420 { 00:26:29.420 "params": { 00:26:29.420 "name": "Nvme$subsystem", 00:26:29.420 "trtype": "$TEST_TRANSPORT", 00:26:29.420 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.420 "adrfam": "ipv4", 00:26:29.420 "trsvcid": "$NVMF_PORT", 00:26:29.420 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.420 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.420 "hdgst": ${hdgst:-false}, 00:26:29.420 "ddgst": ${ddgst:-false} 00:26:29.420 }, 00:26:29.420 "method": "bdev_nvme_attach_controller" 00:26:29.420 } 00:26:29.420 EOF 00:26:29.420 )") 00:26:29.420 06:21:35 -- nvmf/common.sh@542 -- # cat 00:26:29.420 06:21:35 -- nvmf/common.sh@544 -- # jq . 
00:26:29.420 06:21:35 -- nvmf/common.sh@545 -- # IFS=, 00:26:29.420 06:21:35 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:29.420 "params": { 00:26:29.420 "name": "Nvme1", 00:26:29.420 "trtype": "tcp", 00:26:29.420 "traddr": "10.0.0.2", 00:26:29.420 "adrfam": "ipv4", 00:26:29.420 "trsvcid": "4420", 00:26:29.420 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:29.420 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:29.420 "hdgst": false, 00:26:29.420 "ddgst": false 00:26:29.420 }, 00:26:29.420 "method": "bdev_nvme_attach_controller" 00:26:29.420 }' 00:26:29.420 [2024-07-13 06:21:35.787188] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:29.420 [2024-07-13 06:21:35.787269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1232101 ] 00:26:29.420 EAL: No free 2048 kB hugepages reported on node 1 00:26:29.420 [2024-07-13 06:21:35.848249] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.677 [2024-07-13 06:21:35.957022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:29.677 Running I/O for 15 seconds... 00:26:32.968 06:21:38 -- host/bdevperf.sh@33 -- # kill -9 1231799 00:26:32.968 06:21:38 -- host/bdevperf.sh@35 -- # sleep 3 00:26:32.968 [2024-07-13 06:21:38.763275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.968 [2024-07-13 06:21:38.763329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.763362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.968 [2024-07-13 06:21:38.763382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.763403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.968 [2024-07-13 06:21:38.763421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.763441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.968 [2024-07-13 06:21:38.763459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.763479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.968 [2024-07-13 06:21:38.763496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.763516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.968 [2024-07-13 06:21:38.763534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.763553] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.968 [2024-07-13 06:21:38.763572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.763592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.968 [2024-07-13 06:21:38.763611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.763630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.968 [2024-07-13 06:21:38.763649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.763670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.968 [2024-07-13 06:21:38.763697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.763718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.968 [2024-07-13 06:21:38.763737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.763759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.968 [2024-07-13 06:21:38.763778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.763796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.968 [2024-07-13 06:21:38.763812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.763830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.968 [2024-07-13 06:21:38.763846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.763863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.968 [2024-07-13 06:21:38.763888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.763922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.968 [2024-07-13 06:21:38.763937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.763953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:82 nsid:1 lba:2832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.968 [2024-07-13 06:21:38.763968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.763985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.968 [2024-07-13 06:21:38.764000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.764016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.968 [2024-07-13 06:21:38.764032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.764050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.968 [2024-07-13 06:21:38.764064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.764080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.968 [2024-07-13 06:21:38.764095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.764111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.968 [2024-07-13 06:21:38.764126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.764162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.968 [2024-07-13 06:21:38.764180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.764198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.968 [2024-07-13 06:21:38.764214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.764231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.968 [2024-07-13 06:21:38.764247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.764265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.968 [2024-07-13 06:21:38.764281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.764298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2384 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:32.968 [2024-07-13 06:21:38.764314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.764331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.968 [2024-07-13 06:21:38.764347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.764365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.968 [2024-07-13 06:21:38.764381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.764399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.968 [2024-07-13 06:21:38.764416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.764434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.968 [2024-07-13 06:21:38.764450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.764467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.968 [2024-07-13 06:21:38.764483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.764502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.968 [2024-07-13 06:21:38.764518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.764536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:3000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.968 [2024-07-13 06:21:38.764552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.764569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.968 [2024-07-13 06:21:38.764589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.764608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.968 [2024-07-13 06:21:38.764624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.968 [2024-07-13 06:21:38.764642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.969 [2024-07-13 
06:21:38.764658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.764675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.969 [2024-07-13 06:21:38.764691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.764709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.969 [2024-07-13 06:21:38.764725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.764742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.969 [2024-07-13 06:21:38.764758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.764776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.969 [2024-07-13 06:21:38.764792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.764809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.969 [2024-07-13 06:21:38.764825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.764842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.969 [2024-07-13 06:21:38.764858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.764883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.969 [2024-07-13 06:21:38.764915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.764931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.969 [2024-07-13 06:21:38.764945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.764961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.969 [2024-07-13 06:21:38.764976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.764992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.969 [2024-07-13 06:21:38.765006] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.765021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.969 [2024-07-13 06:21:38.765042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.765059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.969 [2024-07-13 06:21:38.765073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.765089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.969 [2024-07-13 06:21:38.765103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.765118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:3072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.969 [2024-07-13 06:21:38.765132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.765164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.969 [2024-07-13 06:21:38.765182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.765199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.969 [2024-07-13 06:21:38.765216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.765234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.969 [2024-07-13 06:21:38.765251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.765269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.969 [2024-07-13 06:21:38.765285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.765302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.969 [2024-07-13 06:21:38.765319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.765336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.969 [2024-07-13 06:21:38.765352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.765370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.969 [2024-07-13 06:21:38.765387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.765405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.969 [2024-07-13 06:21:38.765421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.765438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.969 [2024-07-13 06:21:38.765455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.765477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.969 [2024-07-13 06:21:38.765494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.765512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.969 [2024-07-13 06:21:38.765529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.765546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.969 [2024-07-13 06:21:38.765562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.765580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.969 [2024-07-13 06:21:38.765596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.765614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.969 [2024-07-13 06:21:38.765630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.765648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.969 [2024-07-13 06:21:38.765664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.765683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.969 [2024-07-13 06:21:38.765700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:32.969 [2024-07-13 06:21:38.765718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.969 [2024-07-13 06:21:38.765734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.765752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.969 [2024-07-13 06:21:38.765769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.765787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.969 [2024-07-13 06:21:38.765803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.765821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.969 [2024-07-13 06:21:38.765837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.765855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.969 [2024-07-13 06:21:38.765878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.765911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.969 [2024-07-13 06:21:38.765929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.765947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.969 [2024-07-13 06:21:38.765962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.765978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.969 [2024-07-13 06:21:38.765997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.766014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.969 [2024-07-13 06:21:38.766028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.766043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.969 [2024-07-13 06:21:38.766057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.766072] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.969 [2024-07-13 06:21:38.766086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.766101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.969 [2024-07-13 06:21:38.766116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.969 [2024-07-13 06:21:38.766131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.970 [2024-07-13 06:21:38.766160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.766179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:3184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.970 [2024-07-13 06:21:38.766195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.766212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.970 [2024-07-13 06:21:38.766228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.766246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.970 [2024-07-13 06:21:38.766262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.766279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.970 [2024-07-13 06:21:38.766295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.766313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:3216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.970 [2024-07-13 06:21:38.766328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.766349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.970 [2024-07-13 06:21:38.766365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.766382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.970 [2024-07-13 06:21:38.766398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.766416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:3240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.970 [2024-07-13 06:21:38.766431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.766448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.970 [2024-07-13 06:21:38.766464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.766482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.970 [2024-07-13 06:21:38.766498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.766515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.970 [2024-07-13 06:21:38.766532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.766549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.970 [2024-07-13 06:21:38.766565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.766582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.970 [2024-07-13 06:21:38.766598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.766615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.970 [2024-07-13 06:21:38.766631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.766649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.970 [2024-07-13 06:21:38.766664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.766682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.970 [2024-07-13 06:21:38.766698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.766715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.970 [2024-07-13 06:21:38.766731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.766748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2776 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:32.970 [2024-07-13 06:21:38.766764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.766789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.970 [2024-07-13 06:21:38.766805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.766824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.970 [2024-07-13 06:21:38.766841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.766858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.970 [2024-07-13 06:21:38.766882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.766915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.970 [2024-07-13 06:21:38.766930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.766945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.970 [2024-07-13 06:21:38.766960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.766975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:2872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.970 [2024-07-13 06:21:38.766989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.767005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.970 [2024-07-13 06:21:38.767020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.767035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.970 [2024-07-13 06:21:38.767049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.767064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.970 [2024-07-13 06:21:38.767078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.767094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.970 [2024-07-13 
06:21:38.767108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.767123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.970 [2024-07-13 06:21:38.767137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.767171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.970 [2024-07-13 06:21:38.767187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.767205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.970 [2024-07-13 06:21:38.767226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.767244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.970 [2024-07-13 06:21:38.767260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.767277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.970 [2024-07-13 06:21:38.767293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.767311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.970 [2024-07-13 06:21:38.767327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.767345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:3392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.970 [2024-07-13 06:21:38.767361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.767378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.970 [2024-07-13 06:21:38.767395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.767412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.970 [2024-07-13 06:21:38.767428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.767445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.970 [2024-07-13 06:21:38.767461] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.767478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.970 [2024-07-13 06:21:38.767494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.767512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:32.970 [2024-07-13 06:21:38.767528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.767547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.970 [2024-07-13 06:21:38.767564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.767582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.970 [2024-07-13 06:21:38.767598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.970 [2024-07-13 06:21:38.767616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.971 [2024-07-13 06:21:38.767633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.971 [2024-07-13 06:21:38.767654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.971 [2024-07-13 06:21:38.767671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.971 [2024-07-13 06:21:38.767689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.971 [2024-07-13 06:21:38.767705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.971 [2024-07-13 06:21:38.767723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.971 [2024-07-13 06:21:38.767739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.971 [2024-07-13 06:21:38.767757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.971 [2024-07-13 06:21:38.767773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.971 [2024-07-13 06:21:38.767789] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef13a0 is same with the state(5) to be set 00:26:32.971 [2024-07-13 06:21:38.767808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
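The wall of "ABORTED - SQ DELETION" completions above is the host-side NVMe driver inside bdevperf draining its queue: once the TCP connection to the target died (the first nvmf_tgt was killed with 'kill -9 1231799' just before this), every command still queued on qpair 1 is completed manually with an abort status and printed. A quick, hedged way to summarize such a dump offline, assuming the console output was saved to a file (build.log is only a placeholder name):

grep -o 'ABORTED - SQ DELETION' build.log | wc -l                        # total aborted commands
grep -oE '\*NOTICE\*: (READ|WRITE) sqid:1' build.log | sort | uniq -c    # split by opcode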
00:26:32.971 [2024-07-13 06:21:38.767820] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:32.971 [2024-07-13 06:21:38.767834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2992 len:8 PRP1 0x0 PRP2 0x0 00:26:32.971 [2024-07-13 06:21:38.767848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:32.971 [2024-07-13 06:21:38.767934] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xef13a0 was disconnected and freed. reset controller. 00:26:32.971 [2024-07-13 06:21:38.770517] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.971 [2024-07-13 06:21:38.770592] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.971 [2024-07-13 06:21:38.771201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.971 [2024-07-13 06:21:38.771378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.971 [2024-07-13 06:21:38.771407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.971 [2024-07-13 06:21:38.771425] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.971 [2024-07-13 06:21:38.771580] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.971 [2024-07-13 06:21:38.771779] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.971 [2024-07-13 06:21:38.771803] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.971 [2024-07-13 06:21:38.771822] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.971 [2024-07-13 06:21:38.774269] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.971 [2024-07-13 06:21:38.783695] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.971 [2024-07-13 06:21:38.784049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.971 [2024-07-13 06:21:38.784208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.971 [2024-07-13 06:21:38.784237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.971 [2024-07-13 06:21:38.784256] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.971 [2024-07-13 06:21:38.784378] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.971 [2024-07-13 06:21:38.784564] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.971 [2024-07-13 06:21:38.784590] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.971 [2024-07-13 06:21:38.784606] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
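From here to the end of the excerpt the driver repeats the same cycle: nvme_ctrlr_disconnect, a connect() that fails with errno = 111, "controller reinitialization failed", "Resetting controller failed". errno 111 is ECONNREFUSED: nothing is listening on 10.0.0.2:4420 any more, because the process that owned that listener is gone. A small sketch for decoding the errno and polling until a listener comes back; the address and port are the ones in the log, and relying on bash's /dev/tcp redirection is an assumption about the bash build on the test host:

# Decode errno 111 (prints: ECONNREFUSED Connection refused)
python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'

# Poll until something accepts TCP connections on the target address again.
until timeout 1 bash -c '</dev/tcp/10.0.0.2/4420' 2>/dev/null; do
    sleep 1
done
echo '10.0.0.2:4420 is accepting connections again'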
00:26:32.971 [2024-07-13 06:21:38.787084] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.971 [2024-07-13 06:21:38.796425] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.971 [2024-07-13 06:21:38.796781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.971 [2024-07-13 06:21:38.796985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.971 [2024-07-13 06:21:38.797015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.971 [2024-07-13 06:21:38.797033] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.971 [2024-07-13 06:21:38.797191] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.971 [2024-07-13 06:21:38.797345] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.971 [2024-07-13 06:21:38.797370] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.971 [2024-07-13 06:21:38.797387] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.971 [2024-07-13 06:21:38.800009] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.971 [2024-07-13 06:21:38.809200] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.971 [2024-07-13 06:21:38.809565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.971 [2024-07-13 06:21:38.809737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.971 [2024-07-13 06:21:38.809766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.971 [2024-07-13 06:21:38.809784] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.971 [2024-07-13 06:21:38.809973] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.971 [2024-07-13 06:21:38.810236] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.971 [2024-07-13 06:21:38.810262] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.971 [2024-07-13 06:21:38.810278] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.971 [2024-07-13 06:21:38.812893] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
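The --json /dev/fd/6x argument passed to both bdevperf invocations earlier in this run is fed by gen_nvmf_target_json, whose resolved bdev_nvme_attach_controller fragment is the printf output shown above. As a rough sketch, the file handed to bdevperf should look something like the heredoc below; only the params/method entry is taken from the log, while the outer subsystems/bdev/config wrapper follows SPDK's generic JSON-config layout and is an assumption here (the real helper may emit additional entries):

cat > bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF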
00:26:32.971 [2024-07-13 06:21:38.822068] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.971 [2024-07-13 06:21:38.822462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.971 [2024-07-13 06:21:38.822716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.971 [2024-07-13 06:21:38.822772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.971 [2024-07-13 06:21:38.822791] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.971 [2024-07-13 06:21:38.823001] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.971 [2024-07-13 06:21:38.823207] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.971 [2024-07-13 06:21:38.823233] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.971 [2024-07-13 06:21:38.823250] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.971 [2024-07-13 06:21:38.825723] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.971 [2024-07-13 06:21:38.834781] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.971 [2024-07-13 06:21:38.835126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.971 [2024-07-13 06:21:38.835347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.971 [2024-07-13 06:21:38.835400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.971 [2024-07-13 06:21:38.835419] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.971 [2024-07-13 06:21:38.835585] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.971 [2024-07-13 06:21:38.835702] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.971 [2024-07-13 06:21:38.835728] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.971 [2024-07-13 06:21:38.835744] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.971 [2024-07-13 06:21:38.838145] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
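Pulling together the rpc_cmd calls from host/bdevperf.sh near the top of this run, the target-side setup and the first bdevperf pass boil down to the sketch below. The RPC method names, their arguments and the bdevperf flags are copied from the log; the checkout path, the scripts/rpc.py wrapper and the default RPC socket are assumptions about the harness, and bdevperf.json is the file sketched just above:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed checkout location
rpc() { "$SPDK/scripts/rpc.py" "$@"; }                   # assumes the default /var/tmp/spdk.sock

rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB malloc bdev, 512-byte blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# First pass: 1-second verify workload at queue depth 128 with 4 KiB I/O,
# which produced the ~12.9k IOPS / 50.67 MiB/s result reported above.
"$SPDK/build/examples/bdevperf" --json bdevperf.json -q 128 -o 4096 -w verify -t 1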
00:26:32.971 [2024-07-13 06:21:38.847414] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.971 [2024-07-13 06:21:38.847770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.971 [2024-07-13 06:21:38.847938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.971 [2024-07-13 06:21:38.847969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.971 [2024-07-13 06:21:38.847987] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.971 [2024-07-13 06:21:38.848187] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.971 [2024-07-13 06:21:38.848397] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.971 [2024-07-13 06:21:38.848423] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.971 [2024-07-13 06:21:38.848440] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.971 [2024-07-13 06:21:38.850821] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.971 [2024-07-13 06:21:38.859979] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.971 [2024-07-13 06:21:38.860493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.971 [2024-07-13 06:21:38.860798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.971 [2024-07-13 06:21:38.860851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.971 [2024-07-13 06:21:38.860878] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.971 [2024-07-13 06:21:38.861039] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.971 [2024-07-13 06:21:38.861161] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.971 [2024-07-13 06:21:38.861191] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.971 [2024-07-13 06:21:38.861208] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.971 [2024-07-13 06:21:38.863922] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
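The abort storm and the reset loop in this excerpt come from the second half of host/bdevperf.sh: a longer bdevperf run is started in the background (-t 15 -f, pid 1232101 here) and, three seconds in, the first target process (pid 1231799) is killed out from under it. Condensed into a sketch, with $SPDK and bdevperf.json carried over from the previous sketches and $nvmfpid standing in for wherever the harness keeps the first target's PID:

# Longer verify run in the background; the flags are the ones shown in the log.
"$SPDK/build/examples/bdevperf" --json bdevperf.json -q 128 -o 4096 -w verify -t 15 -f &
bdevperfpid=$!

sleep 3
kill -9 "$nvmfpid"   # in this log: kill -9 1231799
sleep 3
# From this point on bdevperf's reconnect attempts fail with ECONNREFUSED,
# which is the repeating "Resetting controller failed." loop seen above.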
00:26:32.971 [2024-07-13 06:21:38.872601] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.972 [2024-07-13 06:21:38.872976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.972 [2024-07-13 06:21:38.873143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.972 [2024-07-13 06:21:38.873173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.972 [2024-07-13 06:21:38.873191] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.972 [2024-07-13 06:21:38.873395] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.972 [2024-07-13 06:21:38.873581] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.972 [2024-07-13 06:21:38.873607] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.972 [2024-07-13 06:21:38.873624] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.972 [2024-07-13 06:21:38.876039] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.972 [2024-07-13 06:21:38.885456] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.972 [2024-07-13 06:21:38.885813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.972 [2024-07-13 06:21:38.885980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.972 [2024-07-13 06:21:38.886008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.972 [2024-07-13 06:21:38.886025] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.972 [2024-07-13 06:21:38.886190] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.972 [2024-07-13 06:21:38.886328] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.972 [2024-07-13 06:21:38.886354] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.972 [2024-07-13 06:21:38.886371] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.972 [2024-07-13 06:21:38.888852] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.972 [2024-07-13 06:21:38.898224] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.972 [2024-07-13 06:21:38.898538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.972 [2024-07-13 06:21:38.898768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.972 [2024-07-13 06:21:38.898820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.972 [2024-07-13 06:21:38.898838] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.972 [2024-07-13 06:21:38.899107] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.972 [2024-07-13 06:21:38.899276] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.972 [2024-07-13 06:21:38.899302] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.972 [2024-07-13 06:21:38.899324] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.972 [2024-07-13 06:21:38.901857] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.972 [2024-07-13 06:21:38.910898] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.972 [2024-07-13 06:21:38.911276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.972 [2024-07-13 06:21:38.911482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.972 [2024-07-13 06:21:38.911511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.972 [2024-07-13 06:21:38.911529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.972 [2024-07-13 06:21:38.911682] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.972 [2024-07-13 06:21:38.911879] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.972 [2024-07-13 06:21:38.911905] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.972 [2024-07-13 06:21:38.911922] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.972 [2024-07-13 06:21:38.914457] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.972 [2024-07-13 06:21:38.923501] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.972 [2024-07-13 06:21:38.923906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.972 [2024-07-13 06:21:38.924068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.972 [2024-07-13 06:21:38.924094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.972 [2024-07-13 06:21:38.924110] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.972 [2024-07-13 06:21:38.924286] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.972 [2024-07-13 06:21:38.924445] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.972 [2024-07-13 06:21:38.924471] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.972 [2024-07-13 06:21:38.924488] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.972 [2024-07-13 06:21:38.926961] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.972 [2024-07-13 06:21:38.936301] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.972 [2024-07-13 06:21:38.936641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.972 [2024-07-13 06:21:38.936806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.972 [2024-07-13 06:21:38.936835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.972 [2024-07-13 06:21:38.936853] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.972 [2024-07-13 06:21:38.937029] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.972 [2024-07-13 06:21:38.937205] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.972 [2024-07-13 06:21:38.937231] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.972 [2024-07-13 06:21:38.937248] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.972 [2024-07-13 06:21:38.939828] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.972 [2024-07-13 06:21:38.949036] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.972 [2024-07-13 06:21:38.949372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.972 [2024-07-13 06:21:38.949542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.972 [2024-07-13 06:21:38.949571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.972 [2024-07-13 06:21:38.949590] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.972 [2024-07-13 06:21:38.949761] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.972 [2024-07-13 06:21:38.949929] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.972 [2024-07-13 06:21:38.949955] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.972 [2024-07-13 06:21:38.949972] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.972 [2024-07-13 06:21:38.952671] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.972 [2024-07-13 06:21:38.961649] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.972 [2024-07-13 06:21:38.962109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.972 [2024-07-13 06:21:38.962330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.972 [2024-07-13 06:21:38.962359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.972 [2024-07-13 06:21:38.962378] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.972 [2024-07-13 06:21:38.962566] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.972 [2024-07-13 06:21:38.962771] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.972 [2024-07-13 06:21:38.962797] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.972 [2024-07-13 06:21:38.962813] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.972 [2024-07-13 06:21:38.965394] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.972 [2024-07-13 06:21:38.974243] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.972 [2024-07-13 06:21:38.974607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.972 [2024-07-13 06:21:38.974741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.972 [2024-07-13 06:21:38.974770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.973 [2024-07-13 06:21:38.974788] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.973 [2024-07-13 06:21:38.974957] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.973 [2024-07-13 06:21:38.975152] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.973 [2024-07-13 06:21:38.975178] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.973 [2024-07-13 06:21:38.975195] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.973 [2024-07-13 06:21:38.977614] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.973 [2024-07-13 06:21:38.987096] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.973 [2024-07-13 06:21:38.987448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.973 [2024-07-13 06:21:38.987558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.973 [2024-07-13 06:21:38.987584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.973 [2024-07-13 06:21:38.987600] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.973 [2024-07-13 06:21:38.987697] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.973 [2024-07-13 06:21:38.987926] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.973 [2024-07-13 06:21:38.987952] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.973 [2024-07-13 06:21:38.987968] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.973 [2024-07-13 06:21:38.990668] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.973 [2024-07-13 06:21:38.999770] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.973 [2024-07-13 06:21:39.000143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.973 [2024-07-13 06:21:39.000308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.973 [2024-07-13 06:21:39.000337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.973 [2024-07-13 06:21:39.000356] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.973 [2024-07-13 06:21:39.000526] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.973 [2024-07-13 06:21:39.000698] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.973 [2024-07-13 06:21:39.000723] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.973 [2024-07-13 06:21:39.000740] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.973 [2024-07-13 06:21:39.003456] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.973 [2024-07-13 06:21:39.012629] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.973 [2024-07-13 06:21:39.012968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.973 [2024-07-13 06:21:39.013137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.973 [2024-07-13 06:21:39.013166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.973 [2024-07-13 06:21:39.013184] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.973 [2024-07-13 06:21:39.013320] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.973 [2024-07-13 06:21:39.013511] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.973 [2024-07-13 06:21:39.013537] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.973 [2024-07-13 06:21:39.013553] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.973 [2024-07-13 06:21:39.015945] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.973 [2024-07-13 06:21:39.025662] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.973 [2024-07-13 06:21:39.026016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.973 [2024-07-13 06:21:39.026166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.973 [2024-07-13 06:21:39.026193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.973 [2024-07-13 06:21:39.026209] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.973 [2024-07-13 06:21:39.026425] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.973 [2024-07-13 06:21:39.026580] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.973 [2024-07-13 06:21:39.026601] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.973 [2024-07-13 06:21:39.026614] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.973 [2024-07-13 06:21:39.029034] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.973 [2024-07-13 06:21:39.038451] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.973 [2024-07-13 06:21:39.038864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.973 [2024-07-13 06:21:39.039026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.973 [2024-07-13 06:21:39.039055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.973 [2024-07-13 06:21:39.039073] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.973 [2024-07-13 06:21:39.039262] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.973 [2024-07-13 06:21:39.039421] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.973 [2024-07-13 06:21:39.039446] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.973 [2024-07-13 06:21:39.039463] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.973 [2024-07-13 06:21:39.041935] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.973 [2024-07-13 06:21:39.051248] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.973 [2024-07-13 06:21:39.051647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.973 [2024-07-13 06:21:39.051797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.973 [2024-07-13 06:21:39.051824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.973 [2024-07-13 06:21:39.051840] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.973 [2024-07-13 06:21:39.052027] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.973 [2024-07-13 06:21:39.052216] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.973 [2024-07-13 06:21:39.052242] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.973 [2024-07-13 06:21:39.052258] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.973 [2024-07-13 06:21:39.054566] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.973 [2024-07-13 06:21:39.063985] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.973 [2024-07-13 06:21:39.064322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.973 [2024-07-13 06:21:39.064529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.973 [2024-07-13 06:21:39.064560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.973 [2024-07-13 06:21:39.064578] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.973 [2024-07-13 06:21:39.064813] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.973 [2024-07-13 06:21:39.064966] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.973 [2024-07-13 06:21:39.064993] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.973 [2024-07-13 06:21:39.065010] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.973 [2024-07-13 06:21:39.067448] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.973 [2024-07-13 06:21:39.076728] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.973 [2024-07-13 06:21:39.077046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.973 [2024-07-13 06:21:39.077244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.973 [2024-07-13 06:21:39.077290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.973 [2024-07-13 06:21:39.077308] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.973 [2024-07-13 06:21:39.077526] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.973 [2024-07-13 06:21:39.077697] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.973 [2024-07-13 06:21:39.077723] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.973 [2024-07-13 06:21:39.077739] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.973 [2024-07-13 06:21:39.080244] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.973 [2024-07-13 06:21:39.089478] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.973 [2024-07-13 06:21:39.089819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.973 [2024-07-13 06:21:39.089982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.973 [2024-07-13 06:21:39.090010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.973 [2024-07-13 06:21:39.090026] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.973 [2024-07-13 06:21:39.090216] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.973 [2024-07-13 06:21:39.090390] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.973 [2024-07-13 06:21:39.090415] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.973 [2024-07-13 06:21:39.090431] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.973 [2024-07-13 06:21:39.093007] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.973 [2024-07-13 06:21:39.102090] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.973 [2024-07-13 06:21:39.102489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.973 [2024-07-13 06:21:39.102679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.973 [2024-07-13 06:21:39.102708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.973 [2024-07-13 06:21:39.102731] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.974 [2024-07-13 06:21:39.102891] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.974 [2024-07-13 06:21:39.103078] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.974 [2024-07-13 06:21:39.103104] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.974 [2024-07-13 06:21:39.103121] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.974 [2024-07-13 06:21:39.105672] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.974 [2024-07-13 06:21:39.114885] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.974 [2024-07-13 06:21:39.115234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.974 [2024-07-13 06:21:39.115397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.974 [2024-07-13 06:21:39.115443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.974 [2024-07-13 06:21:39.115461] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.974 [2024-07-13 06:21:39.115614] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.974 [2024-07-13 06:21:39.115814] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.974 [2024-07-13 06:21:39.115839] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.974 [2024-07-13 06:21:39.115856] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.974 [2024-07-13 06:21:39.118250] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.974 [2024-07-13 06:21:39.127569] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.974 [2024-07-13 06:21:39.127930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.974 [2024-07-13 06:21:39.128169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.974 [2024-07-13 06:21:39.128218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.974 [2024-07-13 06:21:39.128237] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.974 [2024-07-13 06:21:39.128390] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.974 [2024-07-13 06:21:39.128576] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.974 [2024-07-13 06:21:39.128602] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.974 [2024-07-13 06:21:39.128618] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.974 [2024-07-13 06:21:39.131241] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.974 [2024-07-13 06:21:39.140523] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.974 [2024-07-13 06:21:39.140973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.974 [2024-07-13 06:21:39.141217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.974 [2024-07-13 06:21:39.141246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.974 [2024-07-13 06:21:39.141264] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.974 [2024-07-13 06:21:39.141432] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.974 [2024-07-13 06:21:39.141649] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.974 [2024-07-13 06:21:39.141675] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.974 [2024-07-13 06:21:39.141692] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.974 [2024-07-13 06:21:39.144181] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.974 [2024-07-13 06:21:39.153204] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.974 [2024-07-13 06:21:39.153597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.974 [2024-07-13 06:21:39.153758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.974 [2024-07-13 06:21:39.153784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.974 [2024-07-13 06:21:39.153816] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.974 [2024-07-13 06:21:39.153983] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.974 [2024-07-13 06:21:39.154170] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.974 [2024-07-13 06:21:39.154196] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.974 [2024-07-13 06:21:39.154213] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.974 [2024-07-13 06:21:39.156779] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.974 [2024-07-13 06:21:39.166064] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.974 [2024-07-13 06:21:39.166471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.974 [2024-07-13 06:21:39.166783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.974 [2024-07-13 06:21:39.166842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.974 [2024-07-13 06:21:39.166860] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.974 [2024-07-13 06:21:39.167052] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.974 [2024-07-13 06:21:39.167228] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.974 [2024-07-13 06:21:39.167254] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.974 [2024-07-13 06:21:39.167270] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.974 [2024-07-13 06:21:39.169934] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.974 [2024-07-13 06:21:39.178796] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.974 [2024-07-13 06:21:39.179144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.974 [2024-07-13 06:21:39.179459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.974 [2024-07-13 06:21:39.179520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.974 [2024-07-13 06:21:39.179538] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.974 [2024-07-13 06:21:39.179673] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.974 [2024-07-13 06:21:39.179817] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.974 [2024-07-13 06:21:39.179843] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.974 [2024-07-13 06:21:39.179859] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.974 [2024-07-13 06:21:39.182372] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.974 [2024-07-13 06:21:39.191609] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.974 [2024-07-13 06:21:39.192023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.974 [2024-07-13 06:21:39.192269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.974 [2024-07-13 06:21:39.192321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.974 [2024-07-13 06:21:39.192340] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.974 [2024-07-13 06:21:39.192498] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.974 [2024-07-13 06:21:39.192656] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.974 [2024-07-13 06:21:39.192682] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.974 [2024-07-13 06:21:39.192698] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.974 [2024-07-13 06:21:39.195252] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.974 [2024-07-13 06:21:39.204450] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.974 [2024-07-13 06:21:39.204816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.974 [2024-07-13 06:21:39.204976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.974 [2024-07-13 06:21:39.205003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.974 [2024-07-13 06:21:39.205020] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.974 [2024-07-13 06:21:39.205214] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.974 [2024-07-13 06:21:39.205416] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.974 [2024-07-13 06:21:39.205441] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.974 [2024-07-13 06:21:39.205458] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.974 [2024-07-13 06:21:39.207963] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.974 [2024-07-13 06:21:39.217140] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.974 [2024-07-13 06:21:39.217536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.974 [2024-07-13 06:21:39.217722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.974 [2024-07-13 06:21:39.217751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.974 [2024-07-13 06:21:39.217769] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.974 [2024-07-13 06:21:39.217966] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.974 [2024-07-13 06:21:39.218102] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.974 [2024-07-13 06:21:39.218132] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.974 [2024-07-13 06:21:39.218150] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.974 [2024-07-13 06:21:39.220690] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.974 [2024-07-13 06:21:39.229962] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.974 [2024-07-13 06:21:39.230321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.974 [2024-07-13 06:21:39.230583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.974 [2024-07-13 06:21:39.230623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.974 [2024-07-13 06:21:39.230639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.974 [2024-07-13 06:21:39.230841] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.975 [2024-07-13 06:21:39.231007] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.975 [2024-07-13 06:21:39.231032] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.975 [2024-07-13 06:21:39.231048] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.975 [2024-07-13 06:21:39.233497] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.975 [2024-07-13 06:21:39.242626] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.975 [2024-07-13 06:21:39.242956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.975 [2024-07-13 06:21:39.243093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.975 [2024-07-13 06:21:39.243120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.975 [2024-07-13 06:21:39.243153] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.975 [2024-07-13 06:21:39.243357] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.975 [2024-07-13 06:21:39.243510] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.975 [2024-07-13 06:21:39.243536] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.975 [2024-07-13 06:21:39.243552] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.975 [2024-07-13 06:21:39.245777] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.975 [2024-07-13 06:21:39.255466] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.975 [2024-07-13 06:21:39.255838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.975 [2024-07-13 06:21:39.256018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.975 [2024-07-13 06:21:39.256045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.975 [2024-07-13 06:21:39.256061] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.975 [2024-07-13 06:21:39.256214] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.975 [2024-07-13 06:21:39.256383] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.975 [2024-07-13 06:21:39.256408] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.975 [2024-07-13 06:21:39.256430] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.975 [2024-07-13 06:21:39.258850] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.975 [2024-07-13 06:21:39.267999] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.975 [2024-07-13 06:21:39.268386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.975 [2024-07-13 06:21:39.268609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.975 [2024-07-13 06:21:39.268635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.975 [2024-07-13 06:21:39.268652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.975 [2024-07-13 06:21:39.268826] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.975 [2024-07-13 06:21:39.269022] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.975 [2024-07-13 06:21:39.269043] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.975 [2024-07-13 06:21:39.269057] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.975 [2024-07-13 06:21:39.271551] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.975 [2024-07-13 06:21:39.280855] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.975 [2024-07-13 06:21:39.281319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.975 [2024-07-13 06:21:39.281496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.975 [2024-07-13 06:21:39.281543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.975 [2024-07-13 06:21:39.281561] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.975 [2024-07-13 06:21:39.281714] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.975 [2024-07-13 06:21:39.281945] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.975 [2024-07-13 06:21:39.281971] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.975 [2024-07-13 06:21:39.281988] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.975 [2024-07-13 06:21:39.284558] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.975 [2024-07-13 06:21:39.293661] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.975 [2024-07-13 06:21:39.294000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.975 [2024-07-13 06:21:39.294170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.975 [2024-07-13 06:21:39.294228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.975 [2024-07-13 06:21:39.294246] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.975 [2024-07-13 06:21:39.294404] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.975 [2024-07-13 06:21:39.294602] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.975 [2024-07-13 06:21:39.294628] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.975 [2024-07-13 06:21:39.294645] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.975 [2024-07-13 06:21:39.297005] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.975 [2024-07-13 06:21:39.306363] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.975 [2024-07-13 06:21:39.306699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.975 [2024-07-13 06:21:39.306878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.975 [2024-07-13 06:21:39.306905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.975 [2024-07-13 06:21:39.306921] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.975 [2024-07-13 06:21:39.307062] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.975 [2024-07-13 06:21:39.307236] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.975 [2024-07-13 06:21:39.307262] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.975 [2024-07-13 06:21:39.307279] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.975 [2024-07-13 06:21:39.309810] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.975 [2024-07-13 06:21:39.319136] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.975 [2024-07-13 06:21:39.319493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.975 [2024-07-13 06:21:39.319781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.975 [2024-07-13 06:21:39.319830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.975 [2024-07-13 06:21:39.319849] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.975 [2024-07-13 06:21:39.319968] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.975 [2024-07-13 06:21:39.320155] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.975 [2024-07-13 06:21:39.320181] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.975 [2024-07-13 06:21:39.320197] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.975 [2024-07-13 06:21:39.322853] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.975 [2024-07-13 06:21:39.331758] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.975 [2024-07-13 06:21:39.332184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.975 [2024-07-13 06:21:39.332386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.975 [2024-07-13 06:21:39.332415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.975 [2024-07-13 06:21:39.332433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.975 [2024-07-13 06:21:39.332613] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.975 [2024-07-13 06:21:39.332836] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.975 [2024-07-13 06:21:39.332861] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.975 [2024-07-13 06:21:39.332890] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.975 [2024-07-13 06:21:39.335408] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.975 [2024-07-13 06:21:39.344579] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.975 [2024-07-13 06:21:39.344937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.975 [2024-07-13 06:21:39.345127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.975 [2024-07-13 06:21:39.345156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.975 [2024-07-13 06:21:39.345174] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.975 [2024-07-13 06:21:39.345345] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.975 [2024-07-13 06:21:39.345532] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.975 [2024-07-13 06:21:39.345557] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.975 [2024-07-13 06:21:39.345574] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.975 [2024-07-13 06:21:39.348035] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.975 [2024-07-13 06:21:39.357321] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.975 [2024-07-13 06:21:39.357681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.975 [2024-07-13 06:21:39.357849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.975 [2024-07-13 06:21:39.357887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.975 [2024-07-13 06:21:39.357907] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.975 [2024-07-13 06:21:39.358065] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.975 [2024-07-13 06:21:39.358228] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.976 [2024-07-13 06:21:39.358253] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.976 [2024-07-13 06:21:39.358270] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.976 [2024-07-13 06:21:39.360976] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.976 [2024-07-13 06:21:39.370135] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.976 [2024-07-13 06:21:39.370500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.976 [2024-07-13 06:21:39.370725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.976 [2024-07-13 06:21:39.370777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.976 [2024-07-13 06:21:39.370795] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.976 [2024-07-13 06:21:39.370978] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.976 [2024-07-13 06:21:39.371173] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.976 [2024-07-13 06:21:39.371199] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.976 [2024-07-13 06:21:39.371216] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.976 [2024-07-13 06:21:39.373719] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.976 [2024-07-13 06:21:39.382969] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.976 [2024-07-13 06:21:39.383414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.976 [2024-07-13 06:21:39.383624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.976 [2024-07-13 06:21:39.383653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.976 [2024-07-13 06:21:39.383671] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.976 [2024-07-13 06:21:39.383924] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.976 [2024-07-13 06:21:39.384093] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.976 [2024-07-13 06:21:39.384118] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.976 [2024-07-13 06:21:39.384135] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.976 [2024-07-13 06:21:39.386495] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.976 [2024-07-13 06:21:39.395829] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.976 [2024-07-13 06:21:39.396248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.976 [2024-07-13 06:21:39.396382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.976 [2024-07-13 06:21:39.396411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.976 [2024-07-13 06:21:39.396429] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.976 [2024-07-13 06:21:39.396586] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.976 [2024-07-13 06:21:39.396744] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.976 [2024-07-13 06:21:39.396769] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.976 [2024-07-13 06:21:39.396786] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.976 [2024-07-13 06:21:39.399258] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.976 [2024-07-13 06:21:39.408621] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.976 [2024-07-13 06:21:39.408999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.976 [2024-07-13 06:21:39.409130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.976 [2024-07-13 06:21:39.409156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.976 [2024-07-13 06:21:39.409173] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.976 [2024-07-13 06:21:39.409339] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.976 [2024-07-13 06:21:39.409488] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.976 [2024-07-13 06:21:39.409514] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.976 [2024-07-13 06:21:39.409532] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.976 [2024-07-13 06:21:39.411794] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.976 [2024-07-13 06:21:39.421335] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.976 [2024-07-13 06:21:39.421748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.976 [2024-07-13 06:21:39.421898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.976 [2024-07-13 06:21:39.421935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.976 [2024-07-13 06:21:39.421954] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.976 [2024-07-13 06:21:39.422172] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.976 [2024-07-13 06:21:39.422359] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.976 [2024-07-13 06:21:39.422385] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.976 [2024-07-13 06:21:39.422401] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.976 [2024-07-13 06:21:39.424721] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.976 [2024-07-13 06:21:39.434360] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.976 [2024-07-13 06:21:39.434707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.976 [2024-07-13 06:21:39.434879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.976 [2024-07-13 06:21:39.434909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.976 [2024-07-13 06:21:39.434928] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.976 [2024-07-13 06:21:39.435081] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.976 [2024-07-13 06:21:39.435240] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.976 [2024-07-13 06:21:39.435265] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.976 [2024-07-13 06:21:39.435282] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.976 [2024-07-13 06:21:39.437791] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.976 [2024-07-13 06:21:39.447085] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.976 [2024-07-13 06:21:39.447561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.976 [2024-07-13 06:21:39.447847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.976 [2024-07-13 06:21:39.447916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.976 [2024-07-13 06:21:39.447935] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.976 [2024-07-13 06:21:39.448088] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.976 [2024-07-13 06:21:39.448274] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.976 [2024-07-13 06:21:39.448300] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.976 [2024-07-13 06:21:39.448316] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.976 [2024-07-13 06:21:39.450729] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:32.976 [2024-07-13 06:21:39.459718] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.976 [2024-07-13 06:21:39.460055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.976 [2024-07-13 06:21:39.460249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:32.976 [2024-07-13 06:21:39.460279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:32.976 [2024-07-13 06:21:39.460302] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:32.976 [2024-07-13 06:21:39.460461] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:32.976 [2024-07-13 06:21:39.460665] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:32.976 [2024-07-13 06:21:39.460691] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:32.976 [2024-07-13 06:21:39.460707] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:32.976 [2024-07-13 06:21:39.463266] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:32.976 [2024-07-13 06:21:39.472732] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:32.976 [2024-07-13 06:21:39.473130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.236 [2024-07-13 06:21:39.473277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.236 [2024-07-13 06:21:39.473307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.236 [2024-07-13 06:21:39.473325] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.236 [2024-07-13 06:21:39.473512] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.236 [2024-07-13 06:21:39.473670] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.236 [2024-07-13 06:21:39.473695] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.236 [2024-07-13 06:21:39.473711] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.236 [2024-07-13 06:21:39.476489] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.236 [2024-07-13 06:21:39.485627] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.236 [2024-07-13 06:21:39.485996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.236 [2024-07-13 06:21:39.486150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.236 [2024-07-13 06:21:39.486181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.236 [2024-07-13 06:21:39.486199] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.236 [2024-07-13 06:21:39.486366] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.236 [2024-07-13 06:21:39.486530] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.236 [2024-07-13 06:21:39.486561] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.236 [2024-07-13 06:21:39.486578] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.236 [2024-07-13 06:21:39.489269] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.236 [2024-07-13 06:21:39.498412] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.236 [2024-07-13 06:21:39.498761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.236 [2024-07-13 06:21:39.498973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.236 [2024-07-13 06:21:39.499001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.236 [2024-07-13 06:21:39.499018] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.236 [2024-07-13 06:21:39.499176] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.236 [2024-07-13 06:21:39.499408] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.236 [2024-07-13 06:21:39.499434] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.236 [2024-07-13 06:21:39.499451] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.236 [2024-07-13 06:21:39.502040] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.236 [2024-07-13 06:21:39.511126] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.236 [2024-07-13 06:21:39.511495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.236 [2024-07-13 06:21:39.511678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.236 [2024-07-13 06:21:39.511725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.237 [2024-07-13 06:21:39.511744] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.237 [2024-07-13 06:21:39.512008] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.237 [2024-07-13 06:21:39.512124] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.237 [2024-07-13 06:21:39.512171] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.237 [2024-07-13 06:21:39.512186] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.237 [2024-07-13 06:21:39.514671] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.237 [2024-07-13 06:21:39.523776] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.237 [2024-07-13 06:21:39.524101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.237 [2024-07-13 06:21:39.524269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.237 [2024-07-13 06:21:39.524298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.237 [2024-07-13 06:21:39.524317] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.237 [2024-07-13 06:21:39.524433] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.237 [2024-07-13 06:21:39.524660] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.237 [2024-07-13 06:21:39.524686] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.237 [2024-07-13 06:21:39.524703] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.237 [2024-07-13 06:21:39.527220] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.237 [2024-07-13 06:21:39.536416] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.237 [2024-07-13 06:21:39.536782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.237 [2024-07-13 06:21:39.536937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.237 [2024-07-13 06:21:39.536968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.237 [2024-07-13 06:21:39.536986] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.237 [2024-07-13 06:21:39.537135] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.237 [2024-07-13 06:21:39.537303] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.237 [2024-07-13 06:21:39.537333] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.237 [2024-07-13 06:21:39.537351] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.237 [2024-07-13 06:21:39.539780] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.237 [2024-07-13 06:21:39.549148] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.237 [2024-07-13 06:21:39.549582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.237 [2024-07-13 06:21:39.549806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.237 [2024-07-13 06:21:39.549835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.237 [2024-07-13 06:21:39.549871] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.237 [2024-07-13 06:21:39.550062] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.237 [2024-07-13 06:21:39.550196] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.237 [2024-07-13 06:21:39.550234] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.237 [2024-07-13 06:21:39.550251] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.237 [2024-07-13 06:21:39.552684] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.237 [2024-07-13 06:21:39.561938] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.237 [2024-07-13 06:21:39.562383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.237 [2024-07-13 06:21:39.562588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.237 [2024-07-13 06:21:39.562617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.237 [2024-07-13 06:21:39.562635] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.237 [2024-07-13 06:21:39.562829] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.237 [2024-07-13 06:21:39.562961] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.237 [2024-07-13 06:21:39.562988] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.237 [2024-07-13 06:21:39.563004] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.237 [2024-07-13 06:21:39.565292] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.237 [2024-07-13 06:21:39.574557] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.237 [2024-07-13 06:21:39.574892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.237 [2024-07-13 06:21:39.575035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.237 [2024-07-13 06:21:39.575064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.237 [2024-07-13 06:21:39.575083] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.237 [2024-07-13 06:21:39.575264] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.237 [2024-07-13 06:21:39.575456] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.237 [2024-07-13 06:21:39.575482] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.237 [2024-07-13 06:21:39.575505] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.237 [2024-07-13 06:21:39.577988] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.237 [2024-07-13 06:21:39.587390] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.237 [2024-07-13 06:21:39.587759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.237 [2024-07-13 06:21:39.587962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.237 [2024-07-13 06:21:39.587989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.237 [2024-07-13 06:21:39.588006] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.237 [2024-07-13 06:21:39.588198] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.237 [2024-07-13 06:21:39.588420] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.237 [2024-07-13 06:21:39.588446] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.237 [2024-07-13 06:21:39.588463] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.237 [2024-07-13 06:21:39.591023] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.237 [2024-07-13 06:21:39.600178] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.237 [2024-07-13 06:21:39.600494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.237 [2024-07-13 06:21:39.600767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.237 [2024-07-13 06:21:39.600818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.237 [2024-07-13 06:21:39.600836] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.237 [2024-07-13 06:21:39.601002] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.237 [2024-07-13 06:21:39.601197] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.237 [2024-07-13 06:21:39.601234] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.237 [2024-07-13 06:21:39.601252] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.237 [2024-07-13 06:21:39.603753] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.237 [2024-07-13 06:21:39.613036] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.237 [2024-07-13 06:21:39.613443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.237 [2024-07-13 06:21:39.613566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.237 [2024-07-13 06:21:39.613590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.237 [2024-07-13 06:21:39.613605] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.237 [2024-07-13 06:21:39.613765] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.237 [2024-07-13 06:21:39.613956] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.237 [2024-07-13 06:21:39.613984] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.237 [2024-07-13 06:21:39.614000] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.237 [2024-07-13 06:21:39.616497] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.237 [2024-07-13 06:21:39.625966] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.237 [2024-07-13 06:21:39.626478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.237 [2024-07-13 06:21:39.626813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.237 [2024-07-13 06:21:39.626863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.237 [2024-07-13 06:21:39.626905] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.237 [2024-07-13 06:21:39.627095] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.237 [2024-07-13 06:21:39.627277] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.237 [2024-07-13 06:21:39.627303] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.237 [2024-07-13 06:21:39.627319] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.237 [2024-07-13 06:21:39.629678] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.237 [2024-07-13 06:21:39.638801] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.237 [2024-07-13 06:21:39.639143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.237 [2024-07-13 06:21:39.639283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.237 [2024-07-13 06:21:39.639314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.238 [2024-07-13 06:21:39.639333] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.238 [2024-07-13 06:21:39.639528] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.238 [2024-07-13 06:21:39.639729] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.238 [2024-07-13 06:21:39.639755] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.238 [2024-07-13 06:21:39.639771] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.238 [2024-07-13 06:21:39.642345] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.238 [2024-07-13 06:21:39.651674] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.238 [2024-07-13 06:21:39.652045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.238 [2024-07-13 06:21:39.652268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.238 [2024-07-13 06:21:39.652317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.238 [2024-07-13 06:21:39.652336] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.238 [2024-07-13 06:21:39.652530] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.238 [2024-07-13 06:21:39.652693] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.238 [2024-07-13 06:21:39.652720] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.238 [2024-07-13 06:21:39.652736] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.238 [2024-07-13 06:21:39.655248] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.238 [2024-07-13 06:21:39.664316] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.238 [2024-07-13 06:21:39.664766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.238 [2024-07-13 06:21:39.664969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.238 [2024-07-13 06:21:39.665001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.238 [2024-07-13 06:21:39.665020] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.238 [2024-07-13 06:21:39.665199] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.238 [2024-07-13 06:21:39.665409] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.238 [2024-07-13 06:21:39.665435] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.238 [2024-07-13 06:21:39.665452] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.238 [2024-07-13 06:21:39.667988] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.238 [2024-07-13 06:21:39.677397] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.238 [2024-07-13 06:21:39.677783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.238 [2024-07-13 06:21:39.678005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.238 [2024-07-13 06:21:39.678033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.238 [2024-07-13 06:21:39.678050] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.238 [2024-07-13 06:21:39.678206] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.238 [2024-07-13 06:21:39.678342] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.238 [2024-07-13 06:21:39.678368] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.238 [2024-07-13 06:21:39.678386] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.238 [2024-07-13 06:21:39.680910] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.238 [2024-07-13 06:21:39.690192] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.238 [2024-07-13 06:21:39.690519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.238 [2024-07-13 06:21:39.690694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.238 [2024-07-13 06:21:39.690721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.238 [2024-07-13 06:21:39.690738] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.238 [2024-07-13 06:21:39.690959] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.238 [2024-07-13 06:21:39.691119] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.238 [2024-07-13 06:21:39.691144] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.238 [2024-07-13 06:21:39.691161] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.238 [2024-07-13 06:21:39.693486] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.238 [2024-07-13 06:21:39.702766] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.238 [2024-07-13 06:21:39.703132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.238 [2024-07-13 06:21:39.703330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.238 [2024-07-13 06:21:39.703358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.238 [2024-07-13 06:21:39.703376] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.238 [2024-07-13 06:21:39.703575] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.238 [2024-07-13 06:21:39.703717] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.238 [2024-07-13 06:21:39.703744] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.238 [2024-07-13 06:21:39.703761] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.238 [2024-07-13 06:21:39.706138] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.238 [2024-07-13 06:21:39.715543] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.238 [2024-07-13 06:21:39.715908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.238 [2024-07-13 06:21:39.716080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.238 [2024-07-13 06:21:39.716111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.238 [2024-07-13 06:21:39.716129] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.238 [2024-07-13 06:21:39.716312] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.238 [2024-07-13 06:21:39.716492] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.238 [2024-07-13 06:21:39.716519] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.238 [2024-07-13 06:21:39.716535] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.238 [2024-07-13 06:21:39.719230] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.238 [2024-07-13 06:21:39.728307] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.238 [2024-07-13 06:21:39.728620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.238 [2024-07-13 06:21:39.728787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.238 [2024-07-13 06:21:39.728817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.238 [2024-07-13 06:21:39.728835] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.238 [2024-07-13 06:21:39.729061] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.238 [2024-07-13 06:21:39.729262] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.238 [2024-07-13 06:21:39.729288] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.238 [2024-07-13 06:21:39.729305] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.238 [2024-07-13 06:21:39.731818] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.238 [2024-07-13 06:21:39.741205] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.238 [2024-07-13 06:21:39.741596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.238 [2024-07-13 06:21:39.741728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.238 [2024-07-13 06:21:39.741758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.238 [2024-07-13 06:21:39.741775] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.238 [2024-07-13 06:21:39.741945] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.238 [2024-07-13 06:21:39.742087] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.238 [2024-07-13 06:21:39.742113] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.238 [2024-07-13 06:21:39.742130] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.238 [2024-07-13 06:21:39.744647] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.498 [2024-07-13 06:21:39.754015] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.498 [2024-07-13 06:21:39.754361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.498 [2024-07-13 06:21:39.754610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.498 [2024-07-13 06:21:39.754670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.498 [2024-07-13 06:21:39.754689] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.498 [2024-07-13 06:21:39.754877] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.498 [2024-07-13 06:21:39.755068] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.498 [2024-07-13 06:21:39.755096] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.498 [2024-07-13 06:21:39.755112] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.498 [2024-07-13 06:21:39.757619] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.498 [2024-07-13 06:21:39.766733] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.498 [2024-07-13 06:21:39.767080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.498 [2024-07-13 06:21:39.767251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.498 [2024-07-13 06:21:39.767301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.498 [2024-07-13 06:21:39.767320] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.498 [2024-07-13 06:21:39.767536] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.498 [2024-07-13 06:21:39.767699] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.498 [2024-07-13 06:21:39.767724] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.498 [2024-07-13 06:21:39.767740] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.498 [2024-07-13 06:21:39.770291] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.498 [2024-07-13 06:21:39.779825] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.498 [2024-07-13 06:21:39.780274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.498 [2024-07-13 06:21:39.780516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.498 [2024-07-13 06:21:39.780564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.498 [2024-07-13 06:21:39.780593] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.498 [2024-07-13 06:21:39.780710] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.498 [2024-07-13 06:21:39.780896] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.498 [2024-07-13 06:21:39.780923] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.498 [2024-07-13 06:21:39.780941] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.498 [2024-07-13 06:21:39.783560] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.498 [2024-07-13 06:21:39.792646] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.498 [2024-07-13 06:21:39.793011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.498 [2024-07-13 06:21:39.793173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.498 [2024-07-13 06:21:39.793204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.498 [2024-07-13 06:21:39.793222] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.498 [2024-07-13 06:21:39.793381] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.498 [2024-07-13 06:21:39.793590] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.498 [2024-07-13 06:21:39.793617] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.498 [2024-07-13 06:21:39.793633] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.498 [2024-07-13 06:21:39.796311] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.498 [2024-07-13 06:21:39.805310] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.498 [2024-07-13 06:21:39.805733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.498 [2024-07-13 06:21:39.805944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.498 [2024-07-13 06:21:39.805972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.498 [2024-07-13 06:21:39.805989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.498 [2024-07-13 06:21:39.806155] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.498 [2024-07-13 06:21:39.806357] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.498 [2024-07-13 06:21:39.806384] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.498 [2024-07-13 06:21:39.806402] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.498 [2024-07-13 06:21:39.808837] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.498 [2024-07-13 06:21:39.818069] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.498 [2024-07-13 06:21:39.818531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.498 [2024-07-13 06:21:39.818726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.498 [2024-07-13 06:21:39.818754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.498 [2024-07-13 06:21:39.818772] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.498 [2024-07-13 06:21:39.818967] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.498 [2024-07-13 06:21:39.819126] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.498 [2024-07-13 06:21:39.819153] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.498 [2024-07-13 06:21:39.819170] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.498 [2024-07-13 06:21:39.821672] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.498 [2024-07-13 06:21:39.831052] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.498 [2024-07-13 06:21:39.831350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.498 [2024-07-13 06:21:39.831630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.498 [2024-07-13 06:21:39.831683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.498 [2024-07-13 06:21:39.831702] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.498 [2024-07-13 06:21:39.831842] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.498 [2024-07-13 06:21:39.832093] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.498 [2024-07-13 06:21:39.832120] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.498 [2024-07-13 06:21:39.832137] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.498 [2024-07-13 06:21:39.834648] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.498 [2024-07-13 06:21:39.843656] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.498 [2024-07-13 06:21:39.844083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.498 [2024-07-13 06:21:39.844251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.498 [2024-07-13 06:21:39.844279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.498 [2024-07-13 06:21:39.844297] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.498 [2024-07-13 06:21:39.844492] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.498 [2024-07-13 06:21:39.844678] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.498 [2024-07-13 06:21:39.844704] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.498 [2024-07-13 06:21:39.844721] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.498 [2024-07-13 06:21:39.846991] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.498 [2024-07-13 06:21:39.856559] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.498 [2024-07-13 06:21:39.856914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.498 [2024-07-13 06:21:39.857047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.498 [2024-07-13 06:21:39.857075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.498 [2024-07-13 06:21:39.857093] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.498 [2024-07-13 06:21:39.857259] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.498 [2024-07-13 06:21:39.857468] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.499 [2024-07-13 06:21:39.857494] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.499 [2024-07-13 06:21:39.857511] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.499 [2024-07-13 06:21:39.860061] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.499 [2024-07-13 06:21:39.869398] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.499 [2024-07-13 06:21:39.869774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.499 [2024-07-13 06:21:39.869927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.499 [2024-07-13 06:21:39.869953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.499 [2024-07-13 06:21:39.869969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.499 [2024-07-13 06:21:39.870173] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.499 [2024-07-13 06:21:39.870335] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.499 [2024-07-13 06:21:39.870362] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.499 [2024-07-13 06:21:39.870379] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.499 [2024-07-13 06:21:39.872997] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.499 [2024-07-13 06:21:39.882250] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.499 [2024-07-13 06:21:39.882525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.499 [2024-07-13 06:21:39.882692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.499 [2024-07-13 06:21:39.882722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.499 [2024-07-13 06:21:39.882740] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.499 [2024-07-13 06:21:39.882959] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.499 [2024-07-13 06:21:39.883095] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.499 [2024-07-13 06:21:39.883121] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.499 [2024-07-13 06:21:39.883137] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.499 [2024-07-13 06:21:39.885656] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.499 [2024-07-13 06:21:39.895233] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.499 [2024-07-13 06:21:39.895628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.499 [2024-07-13 06:21:39.895786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.499 [2024-07-13 06:21:39.895812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.499 [2024-07-13 06:21:39.895844] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.499 [2024-07-13 06:21:39.896044] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.499 [2024-07-13 06:21:39.896244] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.499 [2024-07-13 06:21:39.896276] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.499 [2024-07-13 06:21:39.896294] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.499 [2024-07-13 06:21:39.898620] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.499 [2024-07-13 06:21:39.908090] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.499 [2024-07-13 06:21:39.908502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.499 [2024-07-13 06:21:39.908728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.499 [2024-07-13 06:21:39.908782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.499 [2024-07-13 06:21:39.908799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.499 [2024-07-13 06:21:39.908999] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.499 [2024-07-13 06:21:39.909204] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.499 [2024-07-13 06:21:39.909232] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.499 [2024-07-13 06:21:39.909249] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.499 [2024-07-13 06:21:39.911653] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.499 [2024-07-13 06:21:39.920664] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.499 [2024-07-13 06:21:39.920986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.499 [2024-07-13 06:21:39.921191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.499 [2024-07-13 06:21:39.921251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.499 [2024-07-13 06:21:39.921269] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.499 [2024-07-13 06:21:39.921406] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.499 [2024-07-13 06:21:39.921604] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.499 [2024-07-13 06:21:39.921631] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.499 [2024-07-13 06:21:39.921648] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.499 [2024-07-13 06:21:39.924163] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.499 [2024-07-13 06:21:39.933306] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.499 [2024-07-13 06:21:39.933648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.499 [2024-07-13 06:21:39.933818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.499 [2024-07-13 06:21:39.933849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.499 [2024-07-13 06:21:39.933879] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.499 [2024-07-13 06:21:39.934054] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.499 [2024-07-13 06:21:39.934257] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.499 [2024-07-13 06:21:39.934284] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.499 [2024-07-13 06:21:39.934306] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.499 [2024-07-13 06:21:39.936706] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.499 [2024-07-13 06:21:39.946023] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.499 [2024-07-13 06:21:39.946360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.499 [2024-07-13 06:21:39.946552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.499 [2024-07-13 06:21:39.946581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.499 [2024-07-13 06:21:39.946599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.499 [2024-07-13 06:21:39.946776] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.499 [2024-07-13 06:21:39.946971] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.499 [2024-07-13 06:21:39.946999] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.499 [2024-07-13 06:21:39.947016] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.499 [2024-07-13 06:21:39.949626] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.499 [2024-07-13 06:21:39.958917] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.499 [2024-07-13 06:21:39.959284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.499 [2024-07-13 06:21:39.959506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.499 [2024-07-13 06:21:39.959559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.499 [2024-07-13 06:21:39.959578] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.499 [2024-07-13 06:21:39.959750] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.499 [2024-07-13 06:21:39.960003] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.499 [2024-07-13 06:21:39.960030] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.499 [2024-07-13 06:21:39.960047] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.499 [2024-07-13 06:21:39.962540] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.499 [2024-07-13 06:21:39.971584] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.499 [2024-07-13 06:21:39.971920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.499 [2024-07-13 06:21:39.972113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.499 [2024-07-13 06:21:39.972142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.499 [2024-07-13 06:21:39.972161] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.499 [2024-07-13 06:21:39.972356] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.499 [2024-07-13 06:21:39.972578] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.499 [2024-07-13 06:21:39.972604] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.499 [2024-07-13 06:21:39.972621] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.499 [2024-07-13 06:21:39.975074] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.499 [2024-07-13 06:21:39.984478] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.499 [2024-07-13 06:21:39.984907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.499 [2024-07-13 06:21:39.985096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.499 [2024-07-13 06:21:39.985125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.499 [2024-07-13 06:21:39.985143] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.499 [2024-07-13 06:21:39.985351] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.500 [2024-07-13 06:21:39.985534] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.500 [2024-07-13 06:21:39.985562] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.500 [2024-07-13 06:21:39.985578] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.500 [2024-07-13 06:21:39.988047] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.500 [2024-07-13 06:21:39.997332] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.500 [2024-07-13 06:21:39.997710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.500 [2024-07-13 06:21:39.997901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.500 [2024-07-13 06:21:39.997932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.500 [2024-07-13 06:21:39.997951] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.500 [2024-07-13 06:21:39.998118] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.500 [2024-07-13 06:21:39.998323] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.500 [2024-07-13 06:21:39.998350] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.500 [2024-07-13 06:21:39.998367] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.500 [2024-07-13 06:21:40.000854] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.758 [2024-07-13 06:21:40.010212] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.758 [2024-07-13 06:21:40.010714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.758 [2024-07-13 06:21:40.010999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.758 [2024-07-13 06:21:40.011031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.758 [2024-07-13 06:21:40.011050] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.758 [2024-07-13 06:21:40.011199] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.758 [2024-07-13 06:21:40.011316] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.758 [2024-07-13 06:21:40.011343] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.758 [2024-07-13 06:21:40.011361] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.758 [2024-07-13 06:21:40.014014] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.758 [2024-07-13 06:21:40.022858] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.758 [2024-07-13 06:21:40.023223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.758 [2024-07-13 06:21:40.023451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.758 [2024-07-13 06:21:40.023482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.758 [2024-07-13 06:21:40.023501] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.758 [2024-07-13 06:21:40.023619] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.758 [2024-07-13 06:21:40.023795] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.758 [2024-07-13 06:21:40.023822] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.758 [2024-07-13 06:21:40.023839] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.759 [2024-07-13 06:21:40.026386] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.759 [2024-07-13 06:21:40.035597] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.759 [2024-07-13 06:21:40.035965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.759 [2024-07-13 06:21:40.036115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.759 [2024-07-13 06:21:40.036141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.759 [2024-07-13 06:21:40.036158] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.759 [2024-07-13 06:21:40.036336] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.759 [2024-07-13 06:21:40.036550] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.759 [2024-07-13 06:21:40.036577] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.759 [2024-07-13 06:21:40.036594] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.759 [2024-07-13 06:21:40.038964] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.759 [2024-07-13 06:21:40.048169] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.759 [2024-07-13 06:21:40.048550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.759 [2024-07-13 06:21:40.048764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.759 [2024-07-13 06:21:40.048794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.759 [2024-07-13 06:21:40.048812] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.759 [2024-07-13 06:21:40.048964] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.759 [2024-07-13 06:21:40.049166] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.759 [2024-07-13 06:21:40.049188] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.759 [2024-07-13 06:21:40.049202] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.759 [2024-07-13 06:21:40.051845] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.759 [2024-07-13 06:21:40.061248] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.759 [2024-07-13 06:21:40.061733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.759 [2024-07-13 06:21:40.061969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.759 [2024-07-13 06:21:40.061997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.759 [2024-07-13 06:21:40.062014] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.759 [2024-07-13 06:21:40.062169] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.759 [2024-07-13 06:21:40.062386] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.759 [2024-07-13 06:21:40.062413] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.759 [2024-07-13 06:21:40.062430] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.759 [2024-07-13 06:21:40.064812] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.759 [2024-07-13 06:21:40.073763] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.759 [2024-07-13 06:21:40.074181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.759 [2024-07-13 06:21:40.074392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.759 [2024-07-13 06:21:40.074418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.759 [2024-07-13 06:21:40.074435] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.759 [2024-07-13 06:21:40.074619] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.759 [2024-07-13 06:21:40.074808] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.759 [2024-07-13 06:21:40.074833] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.759 [2024-07-13 06:21:40.074850] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.759 [2024-07-13 06:21:40.077177] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.759 [2024-07-13 06:21:40.086333] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.759 [2024-07-13 06:21:40.086699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.759 [2024-07-13 06:21:40.086909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.759 [2024-07-13 06:21:40.086941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.759 [2024-07-13 06:21:40.086959] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.759 [2024-07-13 06:21:40.087108] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.759 [2024-07-13 06:21:40.087334] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.759 [2024-07-13 06:21:40.087361] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.759 [2024-07-13 06:21:40.087377] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.759 [2024-07-13 06:21:40.089788] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.759 [2024-07-13 06:21:40.099267] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.759 [2024-07-13 06:21:40.099612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.759 [2024-07-13 06:21:40.099772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.759 [2024-07-13 06:21:40.099805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.759 [2024-07-13 06:21:40.099823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.759 [2024-07-13 06:21:40.099993] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.759 [2024-07-13 06:21:40.100170] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.759 [2024-07-13 06:21:40.100195] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.759 [2024-07-13 06:21:40.100211] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.759 [2024-07-13 06:21:40.102670] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.759 [2024-07-13 06:21:40.111998] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.759 [2024-07-13 06:21:40.112332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.759 [2024-07-13 06:21:40.112513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.759 [2024-07-13 06:21:40.112539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.759 [2024-07-13 06:21:40.112555] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.759 [2024-07-13 06:21:40.112752] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.759 [2024-07-13 06:21:40.112968] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.759 [2024-07-13 06:21:40.112996] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.759 [2024-07-13 06:21:40.113012] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.759 [2024-07-13 06:21:40.115511] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.759 [2024-07-13 06:21:40.124592] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.759 [2024-07-13 06:21:40.124983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.759 [2024-07-13 06:21:40.125187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.759 [2024-07-13 06:21:40.125212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.759 [2024-07-13 06:21:40.125228] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.759 [2024-07-13 06:21:40.125427] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.759 [2024-07-13 06:21:40.125641] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.759 [2024-07-13 06:21:40.125668] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.759 [2024-07-13 06:21:40.125685] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.759 [2024-07-13 06:21:40.128288] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.759 [2024-07-13 06:21:40.137231] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.759 [2024-07-13 06:21:40.137544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.759 [2024-07-13 06:21:40.137685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.759 [2024-07-13 06:21:40.137712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.759 [2024-07-13 06:21:40.137736] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.759 [2024-07-13 06:21:40.137949] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.759 [2024-07-13 06:21:40.138084] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.759 [2024-07-13 06:21:40.138110] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.759 [2024-07-13 06:21:40.138127] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.759 [2024-07-13 06:21:40.140733] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
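The connection failure the log keeps reporting can be reproduced outside SPDK with a plain blocking TCP connect. The sketch below is a hypothetical standalone program, not SPDK code; it assumes the NVMe/TCP target is down, so 10.0.0.2 is reachable but nothing listens on port 4420 (if the host itself were unreachable, the call could instead fail with EHOSTUNREACH or simply time out):

/* Standalone sketch, assuming no listener on 10.0.0.2:4420 but a reachable
 * host: connect() then fails with errno 111 (ECONNREFUSED), the same
 * condition posix_sock_create logs above. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port used by the test */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With nothing listening, this prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Under that assumption connect() returns -1 with errno set to 111, which is exactly what the posix_sock_create entries record on every attempt.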
00:26:33.759 [2024-07-13 06:21:40.150184] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.759 [2024-07-13 06:21:40.150572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.759 [2024-07-13 06:21:40.150737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.759 [2024-07-13 06:21:40.150765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.759 [2024-07-13 06:21:40.150782] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.759 [2024-07-13 06:21:40.150964] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.759 [2024-07-13 06:21:40.151164] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.759 [2024-07-13 06:21:40.151190] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.760 [2024-07-13 06:21:40.151207] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.760 [2024-07-13 06:21:40.153719] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.760 [2024-07-13 06:21:40.163015] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.760 [2024-07-13 06:21:40.163341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.760 [2024-07-13 06:21:40.163505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.760 [2024-07-13 06:21:40.163535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.760 [2024-07-13 06:21:40.163553] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.760 [2024-07-13 06:21:40.163725] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.760 [2024-07-13 06:21:40.163941] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.760 [2024-07-13 06:21:40.163969] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.760 [2024-07-13 06:21:40.163986] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.760 [2024-07-13 06:21:40.166570] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.760 [2024-07-13 06:21:40.175907] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.760 [2024-07-13 06:21:40.176204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.760 [2024-07-13 06:21:40.176424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.760 [2024-07-13 06:21:40.176476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.760 [2024-07-13 06:21:40.176494] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.760 [2024-07-13 06:21:40.176694] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.760 [2024-07-13 06:21:40.176875] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.760 [2024-07-13 06:21:40.176903] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.760 [2024-07-13 06:21:40.176920] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.760 [2024-07-13 06:21:40.179242] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.760 [2024-07-13 06:21:40.188637] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.760 [2024-07-13 06:21:40.189002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.760 [2024-07-13 06:21:40.189165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.760 [2024-07-13 06:21:40.189190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.760 [2024-07-13 06:21:40.189206] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.760 [2024-07-13 06:21:40.189364] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.760 [2024-07-13 06:21:40.189539] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.760 [2024-07-13 06:21:40.189566] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.760 [2024-07-13 06:21:40.189583] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.760 [2024-07-13 06:21:40.192052] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.760 [2024-07-13 06:21:40.201219] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.760 [2024-07-13 06:21:40.201569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.760 [2024-07-13 06:21:40.201710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.760 [2024-07-13 06:21:40.201738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.760 [2024-07-13 06:21:40.201755] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.760 [2024-07-13 06:21:40.201902] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.760 [2024-07-13 06:21:40.202098] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.760 [2024-07-13 06:21:40.202124] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.760 [2024-07-13 06:21:40.202140] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.760 [2024-07-13 06:21:40.204690] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.760 [2024-07-13 06:21:40.214193] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.760 [2024-07-13 06:21:40.214580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.760 [2024-07-13 06:21:40.214773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.760 [2024-07-13 06:21:40.214803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.760 [2024-07-13 06:21:40.214821] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.760 [2024-07-13 06:21:40.214950] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.760 [2024-07-13 06:21:40.215147] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.760 [2024-07-13 06:21:40.215173] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.760 [2024-07-13 06:21:40.215189] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.760 [2024-07-13 06:21:40.217607] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.760 [2024-07-13 06:21:40.227054] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.760 [2024-07-13 06:21:40.227426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.760 [2024-07-13 06:21:40.227689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.760 [2024-07-13 06:21:40.227745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.760 [2024-07-13 06:21:40.227763] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.760 [2024-07-13 06:21:40.227954] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.760 [2024-07-13 06:21:40.228129] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.760 [2024-07-13 06:21:40.228157] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.760 [2024-07-13 06:21:40.228174] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.760 [2024-07-13 06:21:40.230600] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.760 [2024-07-13 06:21:40.239791] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.760 [2024-07-13 06:21:40.240123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.760 [2024-07-13 06:21:40.240348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.760 [2024-07-13 06:21:40.240417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.760 [2024-07-13 06:21:40.240456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.760 [2024-07-13 06:21:40.240696] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.760 [2024-07-13 06:21:40.240909] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.760 [2024-07-13 06:21:40.240937] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.760 [2024-07-13 06:21:40.240954] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.760 [2024-07-13 06:21:40.243371] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:33.760 [2024-07-13 06:21:40.252507] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.760 [2024-07-13 06:21:40.252876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.760 [2024-07-13 06:21:40.253032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.760 [2024-07-13 06:21:40.253057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.760 [2024-07-13 06:21:40.253088] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.760 [2024-07-13 06:21:40.253277] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.760 [2024-07-13 06:21:40.253439] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.760 [2024-07-13 06:21:40.253467] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.760 [2024-07-13 06:21:40.253489] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.760 [2024-07-13 06:21:40.255997] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.760 [2024-07-13 06:21:40.265206] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.760 [2024-07-13 06:21:40.265632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.760 [2024-07-13 06:21:40.265794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.760 [2024-07-13 06:21:40.265822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:33.760 [2024-07-13 06:21:40.265838] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:33.760 [2024-07-13 06:21:40.266030] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:33.760 [2024-07-13 06:21:40.266293] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.760 [2024-07-13 06:21:40.266321] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.760 [2024-07-13 06:21:40.266339] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.019 [2024-07-13 06:21:40.268949] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.019 [2024-07-13 06:21:40.277878] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.019 [2024-07-13 06:21:40.278197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.019 [2024-07-13 06:21:40.278386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.019 [2024-07-13 06:21:40.278411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.019 [2024-07-13 06:21:40.278428] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.020 [2024-07-13 06:21:40.278618] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.020 [2024-07-13 06:21:40.278799] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.020 [2024-07-13 06:21:40.278826] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.020 [2024-07-13 06:21:40.278843] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.020 [2024-07-13 06:21:40.281357] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.020 [2024-07-13 06:21:40.290704] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.020 [2024-07-13 06:21:40.291045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.020 [2024-07-13 06:21:40.291209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.020 [2024-07-13 06:21:40.291239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.020 [2024-07-13 06:21:40.291257] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.020 [2024-07-13 06:21:40.291416] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.020 [2024-07-13 06:21:40.291601] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.020 [2024-07-13 06:21:40.291629] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.020 [2024-07-13 06:21:40.291646] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.020 [2024-07-13 06:21:40.294212] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.020 [2024-07-13 06:21:40.303327] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.020 [2024-07-13 06:21:40.303835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.020 [2024-07-13 06:21:40.304024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.020 [2024-07-13 06:21:40.304052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.020 [2024-07-13 06:21:40.304070] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.020 [2024-07-13 06:21:40.304259] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.020 [2024-07-13 06:21:40.304399] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.020 [2024-07-13 06:21:40.304425] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.020 [2024-07-13 06:21:40.304442] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.020 [2024-07-13 06:21:40.307026] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.020 [2024-07-13 06:21:40.316167] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.020 [2024-07-13 06:21:40.316569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.020 [2024-07-13 06:21:40.316744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.020 [2024-07-13 06:21:40.316772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.020 [2024-07-13 06:21:40.316790] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.020 [2024-07-13 06:21:40.317000] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.020 [2024-07-13 06:21:40.317181] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.020 [2024-07-13 06:21:40.317207] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.020 [2024-07-13 06:21:40.317224] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.020 [2024-07-13 06:21:40.319633] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.020 [2024-07-13 06:21:40.328780] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.020 [2024-07-13 06:21:40.329057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.020 [2024-07-13 06:21:40.329318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.020 [2024-07-13 06:21:40.329373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.020 [2024-07-13 06:21:40.329391] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.020 [2024-07-13 06:21:40.329548] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.020 [2024-07-13 06:21:40.329747] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.020 [2024-07-13 06:21:40.329773] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.020 [2024-07-13 06:21:40.329789] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.020 [2024-07-13 06:21:40.332134] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.020 [2024-07-13 06:21:40.341536] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.020 [2024-07-13 06:21:40.341965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.020 [2024-07-13 06:21:40.342132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.020 [2024-07-13 06:21:40.342162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.020 [2024-07-13 06:21:40.342180] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.020 [2024-07-13 06:21:40.342390] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.020 [2024-07-13 06:21:40.342590] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.020 [2024-07-13 06:21:40.342617] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.020 [2024-07-13 06:21:40.342633] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.020 [2024-07-13 06:21:40.345097] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.020 [2024-07-13 06:21:40.354132] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.020 [2024-07-13 06:21:40.354496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.020 [2024-07-13 06:21:40.354685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.020 [2024-07-13 06:21:40.354713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.020 [2024-07-13 06:21:40.354731] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.020 [2024-07-13 06:21:40.354991] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.020 [2024-07-13 06:21:40.355169] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.020 [2024-07-13 06:21:40.355196] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.020 [2024-07-13 06:21:40.355213] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.020 [2024-07-13 06:21:40.357621] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.020 [2024-07-13 06:21:40.366973] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.020 [2024-07-13 06:21:40.367318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.020 [2024-07-13 06:21:40.367590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.020 [2024-07-13 06:21:40.367641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.020 [2024-07-13 06:21:40.367659] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.020 [2024-07-13 06:21:40.367826] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.020 [2024-07-13 06:21:40.368001] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.020 [2024-07-13 06:21:40.368028] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.020 [2024-07-13 06:21:40.368044] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.020 [2024-07-13 06:21:40.370547] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.020 [2024-07-13 06:21:40.379920] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.020 [2024-07-13 06:21:40.380258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.021 [2024-07-13 06:21:40.380419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.021 [2024-07-13 06:21:40.380447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.021 [2024-07-13 06:21:40.380465] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.021 [2024-07-13 06:21:40.380663] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.021 [2024-07-13 06:21:40.380885] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.021 [2024-07-13 06:21:40.380913] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.021 [2024-07-13 06:21:40.380930] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.021 [2024-07-13 06:21:40.383361] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.021 [2024-07-13 06:21:40.392553] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.021 [2024-07-13 06:21:40.392917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.021 [2024-07-13 06:21:40.393123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.021 [2024-07-13 06:21:40.393188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.021 [2024-07-13 06:21:40.393206] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.021 [2024-07-13 06:21:40.393429] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.021 [2024-07-13 06:21:40.393566] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.021 [2024-07-13 06:21:40.393593] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.021 [2024-07-13 06:21:40.393609] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.021 [2024-07-13 06:21:40.395946] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.021 [2024-07-13 06:21:40.405197] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.021 [2024-07-13 06:21:40.405526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.021 [2024-07-13 06:21:40.405715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.021 [2024-07-13 06:21:40.405743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.021 [2024-07-13 06:21:40.405761] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.021 [2024-07-13 06:21:40.405979] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.021 [2024-07-13 06:21:40.406201] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.021 [2024-07-13 06:21:40.406229] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.021 [2024-07-13 06:21:40.406246] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.021 [2024-07-13 06:21:40.408754] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.021 [2024-07-13 06:21:40.417783] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.021 [2024-07-13 06:21:40.418135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.021 [2024-07-13 06:21:40.418373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.021 [2024-07-13 06:21:40.418430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.021 [2024-07-13 06:21:40.418449] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.021 [2024-07-13 06:21:40.418567] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.021 [2024-07-13 06:21:40.418724] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.021 [2024-07-13 06:21:40.418749] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.021 [2024-07-13 06:21:40.418766] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.021 [2024-07-13 06:21:40.421171] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.021 [2024-07-13 06:21:40.430551] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.021 [2024-07-13 06:21:40.430902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.021 [2024-07-13 06:21:40.431220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.021 [2024-07-13 06:21:40.431276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.021 [2024-07-13 06:21:40.431295] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.021 [2024-07-13 06:21:40.431535] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.021 [2024-07-13 06:21:40.431771] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.021 [2024-07-13 06:21:40.431798] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.021 [2024-07-13 06:21:40.431815] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.021 [2024-07-13 06:21:40.434059] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.021 [2024-07-13 06:21:40.443231] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.021 [2024-07-13 06:21:40.443569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.021 [2024-07-13 06:21:40.443738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.021 [2024-07-13 06:21:40.443768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.021 [2024-07-13 06:21:40.443786] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.021 [2024-07-13 06:21:40.443999] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.021 [2024-07-13 06:21:40.444140] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.021 [2024-07-13 06:21:40.444168] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.021 [2024-07-13 06:21:40.444185] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.021 [2024-07-13 06:21:40.446660] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.021 [2024-07-13 06:21:40.455826] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.021 [2024-07-13 06:21:40.456178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.021 [2024-07-13 06:21:40.456378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.021 [2024-07-13 06:21:40.456408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.021 [2024-07-13 06:21:40.456432] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.021 [2024-07-13 06:21:40.456568] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.021 [2024-07-13 06:21:40.456767] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.021 [2024-07-13 06:21:40.456793] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.021 [2024-07-13 06:21:40.456809] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.021 [2024-07-13 06:21:40.459377] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.021 [2024-07-13 06:21:40.468586] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.021 [2024-07-13 06:21:40.468884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.021 [2024-07-13 06:21:40.469052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.021 [2024-07-13 06:21:40.469083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.021 [2024-07-13 06:21:40.469101] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.021 [2024-07-13 06:21:40.469306] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.021 [2024-07-13 06:21:40.469493] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.021 [2024-07-13 06:21:40.469519] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.021 [2024-07-13 06:21:40.469535] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.021 [2024-07-13 06:21:40.471895] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.021 [2024-07-13 06:21:40.481485] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.021 [2024-07-13 06:21:40.481860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.021 [2024-07-13 06:21:40.482070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.021 [2024-07-13 06:21:40.482099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.022 [2024-07-13 06:21:40.482116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.022 [2024-07-13 06:21:40.482288] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.022 [2024-07-13 06:21:40.482488] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.022 [2024-07-13 06:21:40.482515] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.022 [2024-07-13 06:21:40.482531] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.022 [2024-07-13 06:21:40.485142] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.022 [2024-07-13 06:21:40.494433] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.022 [2024-07-13 06:21:40.494731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.022 [2024-07-13 06:21:40.494921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.022 [2024-07-13 06:21:40.494953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.022 [2024-07-13 06:21:40.494972] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.022 [2024-07-13 06:21:40.495223] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.022 [2024-07-13 06:21:40.495378] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.022 [2024-07-13 06:21:40.495405] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.022 [2024-07-13 06:21:40.495421] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.022 [2024-07-13 06:21:40.497982] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.022 [2024-07-13 06:21:40.507341] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.022 [2024-07-13 06:21:40.507721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.022 [2024-07-13 06:21:40.507857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.022 [2024-07-13 06:21:40.507894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.022 [2024-07-13 06:21:40.507911] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.022 [2024-07-13 06:21:40.508072] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.022 [2024-07-13 06:21:40.508267] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.022 [2024-07-13 06:21:40.508293] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.022 [2024-07-13 06:21:40.508309] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.022 [2024-07-13 06:21:40.510596] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.022 [2024-07-13 06:21:40.519939] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.022 [2024-07-13 06:21:40.520414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.022 [2024-07-13 06:21:40.520645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.022 [2024-07-13 06:21:40.520699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.022 [2024-07-13 06:21:40.520717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.022 [2024-07-13 06:21:40.520889] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.022 [2024-07-13 06:21:40.521140] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.022 [2024-07-13 06:21:40.521168] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.022 [2024-07-13 06:21:40.521186] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.022 [2024-07-13 06:21:40.523860] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.280 [2024-07-13 06:21:40.532851] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.280 [2024-07-13 06:21:40.533233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.280 [2024-07-13 06:21:40.533425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.280 [2024-07-13 06:21:40.533473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.280 [2024-07-13 06:21:40.533492] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.280 [2024-07-13 06:21:40.533691] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.280 [2024-07-13 06:21:40.533896] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.280 [2024-07-13 06:21:40.533934] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.280 [2024-07-13 06:21:40.533958] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.280 [2024-07-13 06:21:40.536482] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.280 [2024-07-13 06:21:40.545624] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.280 [2024-07-13 06:21:40.546015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.280 [2024-07-13 06:21:40.546177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.280 [2024-07-13 06:21:40.546203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.280 [2024-07-13 06:21:40.546234] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.280 [2024-07-13 06:21:40.546388] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.280 [2024-07-13 06:21:40.546605] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.280 [2024-07-13 06:21:40.546632] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.280 [2024-07-13 06:21:40.546650] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.280 [2024-07-13 06:21:40.549009] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.280 [2024-07-13 06:21:40.558590] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.280 [2024-07-13 06:21:40.558952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.280 [2024-07-13 06:21:40.559094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.280 [2024-07-13 06:21:40.559123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.280 [2024-07-13 06:21:40.559141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.280 [2024-07-13 06:21:40.559299] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.280 [2024-07-13 06:21:40.559470] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.280 [2024-07-13 06:21:40.559496] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.280 [2024-07-13 06:21:40.559512] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.280 [2024-07-13 06:21:40.561827] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.280 [2024-07-13 06:21:40.571072] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.280 [2024-07-13 06:21:40.571352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.280 [2024-07-13 06:21:40.571508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.280 [2024-07-13 06:21:40.571535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.280 [2024-07-13 06:21:40.571552] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.280 [2024-07-13 06:21:40.571678] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.280 [2024-07-13 06:21:40.571864] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.280 [2024-07-13 06:21:40.571943] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.280 [2024-07-13 06:21:40.571959] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.280 [2024-07-13 06:21:40.574143] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.280 [2024-07-13 06:21:40.583422] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.280 [2024-07-13 06:21:40.583759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.280 [2024-07-13 06:21:40.583946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.280 [2024-07-13 06:21:40.583974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.280 [2024-07-13 06:21:40.583990] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.280 [2024-07-13 06:21:40.584129] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.280 [2024-07-13 06:21:40.584313] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.280 [2024-07-13 06:21:40.584335] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.280 [2024-07-13 06:21:40.584348] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.280 [2024-07-13 06:21:40.586523] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.280 [2024-07-13 06:21:40.596145] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.280 [2024-07-13 06:21:40.596535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.280 [2024-07-13 06:21:40.596718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.280 [2024-07-13 06:21:40.596744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.280 [2024-07-13 06:21:40.596760] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.280 [2024-07-13 06:21:40.596942] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.280 [2024-07-13 06:21:40.597059] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.280 [2024-07-13 06:21:40.597085] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.280 [2024-07-13 06:21:40.597101] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.280 [2024-07-13 06:21:40.599656] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.280 [2024-07-13 06:21:40.608838] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.280 [2024-07-13 06:21:40.609273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.280 [2024-07-13 06:21:40.609466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.280 [2024-07-13 06:21:40.609495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.280 [2024-07-13 06:21:40.609513] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.281 [2024-07-13 06:21:40.609717] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.281 [2024-07-13 06:21:40.609886] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.281 [2024-07-13 06:21:40.609913] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.281 [2024-07-13 06:21:40.609935] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.281 [2024-07-13 06:21:40.612643] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.281 [2024-07-13 06:21:40.621380] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.281 [2024-07-13 06:21:40.621732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.281 [2024-07-13 06:21:40.621924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.281 [2024-07-13 06:21:40.621955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.281 [2024-07-13 06:21:40.621974] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.281 [2024-07-13 06:21:40.622179] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.281 [2024-07-13 06:21:40.622366] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.281 [2024-07-13 06:21:40.622392] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.281 [2024-07-13 06:21:40.622409] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.281 [2024-07-13 06:21:40.624886] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.281 [2024-07-13 06:21:40.634224] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.281 [2024-07-13 06:21:40.634653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.281 [2024-07-13 06:21:40.634837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.281 [2024-07-13 06:21:40.634863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.281 [2024-07-13 06:21:40.634905] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.281 [2024-07-13 06:21:40.635059] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.281 [2024-07-13 06:21:40.635217] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.281 [2024-07-13 06:21:40.635243] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.281 [2024-07-13 06:21:40.635260] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.281 [2024-07-13 06:21:40.637713] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.281 [2024-07-13 06:21:40.646925] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.281 [2024-07-13 06:21:40.647386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.281 [2024-07-13 06:21:40.647518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.281 [2024-07-13 06:21:40.647545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.281 [2024-07-13 06:21:40.647563] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.281 [2024-07-13 06:21:40.647815] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.281 [2024-07-13 06:21:40.648025] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.281 [2024-07-13 06:21:40.648051] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.281 [2024-07-13 06:21:40.648068] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.281 [2024-07-13 06:21:40.650610] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.281 [2024-07-13 06:21:40.659644] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.281 [2024-07-13 06:21:40.660111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.281 [2024-07-13 06:21:40.660377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.281 [2024-07-13 06:21:40.660422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.281 [2024-07-13 06:21:40.660440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.281 [2024-07-13 06:21:40.660611] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.281 [2024-07-13 06:21:40.660768] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.281 [2024-07-13 06:21:40.660793] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.281 [2024-07-13 06:21:40.660810] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.281 [2024-07-13 06:21:40.663294] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.281 [2024-07-13 06:21:40.672289] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.281 [2024-07-13 06:21:40.672634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.281 [2024-07-13 06:21:40.672773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.281 [2024-07-13 06:21:40.672802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.281 [2024-07-13 06:21:40.672820] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.281 [2024-07-13 06:21:40.673087] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.281 [2024-07-13 06:21:40.673305] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.281 [2024-07-13 06:21:40.673331] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.281 [2024-07-13 06:21:40.673348] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.281 [2024-07-13 06:21:40.675873] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.281 [2024-07-13 06:21:40.685006] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.281 [2024-07-13 06:21:40.685476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.281 [2024-07-13 06:21:40.685768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.281 [2024-07-13 06:21:40.685819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.281 [2024-07-13 06:21:40.685837] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.281 [2024-07-13 06:21:40.686091] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.281 [2024-07-13 06:21:40.686286] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.281 [2024-07-13 06:21:40.686311] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.281 [2024-07-13 06:21:40.686327] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.281 [2024-07-13 06:21:40.688841] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.281 [2024-07-13 06:21:40.697889] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.281 [2024-07-13 06:21:40.698321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.281 [2024-07-13 06:21:40.698550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.281 [2024-07-13 06:21:40.698597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.281 [2024-07-13 06:21:40.698615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.281 [2024-07-13 06:21:40.698732] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.281 [2024-07-13 06:21:40.698903] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.281 [2024-07-13 06:21:40.698929] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.281 [2024-07-13 06:21:40.698946] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.281 [2024-07-13 06:21:40.701523] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.281 [2024-07-13 06:21:40.710402] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.281 [2024-07-13 06:21:40.710771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.281 [2024-07-13 06:21:40.710976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.281 [2024-07-13 06:21:40.711005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.281 [2024-07-13 06:21:40.711022] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.281 [2024-07-13 06:21:40.711177] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.281 [2024-07-13 06:21:40.711376] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.281 [2024-07-13 06:21:40.711404] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.281 [2024-07-13 06:21:40.711425] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.281 [2024-07-13 06:21:40.714024] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.281 [2024-07-13 06:21:40.722977] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.281 [2024-07-13 06:21:40.723352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.281 [2024-07-13 06:21:40.723549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.281 [2024-07-13 06:21:40.723578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.281 [2024-07-13 06:21:40.723595] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.281 [2024-07-13 06:21:40.723775] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.281 [2024-07-13 06:21:40.724006] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.281 [2024-07-13 06:21:40.724031] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.281 [2024-07-13 06:21:40.724046] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.281 [2024-07-13 06:21:40.726653] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.281 [2024-07-13 06:21:40.735770] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.281 [2024-07-13 06:21:40.736040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.281 [2024-07-13 06:21:40.736195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.281 [2024-07-13 06:21:40.736221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.281 [2024-07-13 06:21:40.736247] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.281 [2024-07-13 06:21:40.736427] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.281 [2024-07-13 06:21:40.736627] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.281 [2024-07-13 06:21:40.736653] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.281 [2024-07-13 06:21:40.736670] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.281 [2024-07-13 06:21:40.739074] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.281 [2024-07-13 06:21:40.748482] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.281 [2024-07-13 06:21:40.748797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.281 [2024-07-13 06:21:40.748985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.281 [2024-07-13 06:21:40.749013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.281 [2024-07-13 06:21:40.749029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.281 [2024-07-13 06:21:40.749216] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.281 [2024-07-13 06:21:40.749356] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.281 [2024-07-13 06:21:40.749382] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.281 [2024-07-13 06:21:40.749398] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.281 [2024-07-13 06:21:40.751936] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.281 [2024-07-13 06:21:40.761337] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.281 [2024-07-13 06:21:40.761650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.281 [2024-07-13 06:21:40.761789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.281 [2024-07-13 06:21:40.761818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.281 [2024-07-13 06:21:40.761836] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.281 [2024-07-13 06:21:40.762012] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.281 [2024-07-13 06:21:40.762271] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.282 [2024-07-13 06:21:40.762297] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.282 [2024-07-13 06:21:40.762314] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.282 [2024-07-13 06:21:40.764727] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.282 [2024-07-13 06:21:40.774151] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.282 [2024-07-13 06:21:40.774585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.282 [2024-07-13 06:21:40.774744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.282 [2024-07-13 06:21:40.774775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.282 [2024-07-13 06:21:40.774792] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.282 [2024-07-13 06:21:40.774959] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.282 [2024-07-13 06:21:40.775140] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.282 [2024-07-13 06:21:40.775163] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.282 [2024-07-13 06:21:40.775195] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.282 [2024-07-13 06:21:40.777634] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.282 [2024-07-13 06:21:40.786892] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.282 [2024-07-13 06:21:40.787180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.282 [2024-07-13 06:21:40.787333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.282 [2024-07-13 06:21:40.787359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.282 [2024-07-13 06:21:40.787376] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.282 [2024-07-13 06:21:40.787559] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.282 [2024-07-13 06:21:40.787771] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.282 [2024-07-13 06:21:40.787798] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.282 [2024-07-13 06:21:40.787815] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.282 [2024-07-13 06:21:40.790316] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.539 [2024-07-13 06:21:40.799912] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.539 [2024-07-13 06:21:40.800387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.539 [2024-07-13 06:21:40.800584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.539 [2024-07-13 06:21:40.800609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.539 [2024-07-13 06:21:40.800626] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.539 [2024-07-13 06:21:40.800854] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.539 [2024-07-13 06:21:40.801032] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.539 [2024-07-13 06:21:40.801055] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.539 [2024-07-13 06:21:40.801070] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.539 [2024-07-13 06:21:40.803263] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.539 [2024-07-13 06:21:40.812763] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.539 [2024-07-13 06:21:40.813152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.539 [2024-07-13 06:21:40.813341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.539 [2024-07-13 06:21:40.813366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.539 [2024-07-13 06:21:40.813387] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.539 [2024-07-13 06:21:40.813608] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.539 [2024-07-13 06:21:40.813739] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.539 [2024-07-13 06:21:40.813775] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.539 [2024-07-13 06:21:40.813792] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.539 [2024-07-13 06:21:40.816420] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.539 [2024-07-13 06:21:40.825774] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.539 [2024-07-13 06:21:40.826092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.539 [2024-07-13 06:21:40.826248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.540 [2024-07-13 06:21:40.826295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.540 [2024-07-13 06:21:40.826313] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.540 [2024-07-13 06:21:40.826448] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.540 [2024-07-13 06:21:40.826624] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.540 [2024-07-13 06:21:40.826649] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.540 [2024-07-13 06:21:40.826666] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.540 [2024-07-13 06:21:40.829218] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.540 [2024-07-13 06:21:40.838466] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.540 [2024-07-13 06:21:40.838809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.540 [2024-07-13 06:21:40.838975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.540 [2024-07-13 06:21:40.839002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.540 [2024-07-13 06:21:40.839019] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.540 [2024-07-13 06:21:40.839214] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.540 [2024-07-13 06:21:40.839396] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.540 [2024-07-13 06:21:40.839422] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.540 [2024-07-13 06:21:40.839439] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.540 [2024-07-13 06:21:40.841832] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.540 [2024-07-13 06:21:40.851236] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.540 [2024-07-13 06:21:40.851622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.540 [2024-07-13 06:21:40.851836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.540 [2024-07-13 06:21:40.851871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.540 [2024-07-13 06:21:40.851892] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.540 [2024-07-13 06:21:40.852050] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.540 [2024-07-13 06:21:40.852236] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.540 [2024-07-13 06:21:40.852262] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.540 [2024-07-13 06:21:40.852278] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.540 [2024-07-13 06:21:40.854835] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.540 [2024-07-13 06:21:40.863931] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.540 [2024-07-13 06:21:40.864220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.540 [2024-07-13 06:21:40.864437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.540 [2024-07-13 06:21:40.864478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.540 [2024-07-13 06:21:40.864494] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.540 [2024-07-13 06:21:40.864685] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.540 [2024-07-13 06:21:40.864889] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.540 [2024-07-13 06:21:40.864914] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.540 [2024-07-13 06:21:40.864931] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.540 [2024-07-13 06:21:40.867324] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.540 [2024-07-13 06:21:40.876780] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.540 [2024-07-13 06:21:40.877131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.540 [2024-07-13 06:21:40.877314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.540 [2024-07-13 06:21:40.877360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.540 [2024-07-13 06:21:40.877378] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.540 [2024-07-13 06:21:40.877554] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.540 [2024-07-13 06:21:40.877734] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.540 [2024-07-13 06:21:40.877760] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.540 [2024-07-13 06:21:40.877776] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.540 [2024-07-13 06:21:40.880188] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.540 [2024-07-13 06:21:40.889450] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.540 [2024-07-13 06:21:40.889882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.540 [2024-07-13 06:21:40.890093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.540 [2024-07-13 06:21:40.890122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.540 [2024-07-13 06:21:40.890140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.540 [2024-07-13 06:21:40.890344] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.540 [2024-07-13 06:21:40.890553] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.540 [2024-07-13 06:21:40.890579] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.540 [2024-07-13 06:21:40.890595] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.540 [2024-07-13 06:21:40.893089] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.540 [2024-07-13 06:21:40.902251] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.540 [2024-07-13 06:21:40.902637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.540 [2024-07-13 06:21:40.902801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.540 [2024-07-13 06:21:40.902826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.540 [2024-07-13 06:21:40.902843] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.540 [2024-07-13 06:21:40.903010] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.540 [2024-07-13 06:21:40.903221] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.540 [2024-07-13 06:21:40.903248] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.540 [2024-07-13 06:21:40.903264] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.540 [2024-07-13 06:21:40.905747] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.540 [2024-07-13 06:21:40.915306] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.540 [2024-07-13 06:21:40.915686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.540 [2024-07-13 06:21:40.915855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.540 [2024-07-13 06:21:40.915893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.540 [2024-07-13 06:21:40.915912] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.540 [2024-07-13 06:21:40.916074] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.540 [2024-07-13 06:21:40.916233] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.540 [2024-07-13 06:21:40.916259] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.540 [2024-07-13 06:21:40.916275] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.540 [2024-07-13 06:21:40.918770] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.540 [2024-07-13 06:21:40.928042] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.540 [2024-07-13 06:21:40.928465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.540 [2024-07-13 06:21:40.928679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.540 [2024-07-13 06:21:40.928719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.540 [2024-07-13 06:21:40.928735] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.540 [2024-07-13 06:21:40.928917] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.540 [2024-07-13 06:21:40.929080] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.540 [2024-07-13 06:21:40.929105] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.540 [2024-07-13 06:21:40.929127] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.540 [2024-07-13 06:21:40.931597] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.540 [2024-07-13 06:21:40.941018] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.540 [2024-07-13 06:21:40.941405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.540 [2024-07-13 06:21:40.941637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.540 [2024-07-13 06:21:40.941663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.540 [2024-07-13 06:21:40.941694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.540 [2024-07-13 06:21:40.941854] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.540 [2024-07-13 06:21:40.942087] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.540 [2024-07-13 06:21:40.942113] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.540 [2024-07-13 06:21:40.942130] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.540 [2024-07-13 06:21:40.944678] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.540 [2024-07-13 06:21:40.953699] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.540 [2024-07-13 06:21:40.954098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.540 [2024-07-13 06:21:40.954351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.541 [2024-07-13 06:21:40.954397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.541 [2024-07-13 06:21:40.954415] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.541 [2024-07-13 06:21:40.954619] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.541 [2024-07-13 06:21:40.954782] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.541 [2024-07-13 06:21:40.954806] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.541 [2024-07-13 06:21:40.954823] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.541 [2024-07-13 06:21:40.957350] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.541 [2024-07-13 06:21:40.966493] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.541 [2024-07-13 06:21:40.966903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.541 [2024-07-13 06:21:40.967073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.541 [2024-07-13 06:21:40.967102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.541 [2024-07-13 06:21:40.967120] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.541 [2024-07-13 06:21:40.967319] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.541 [2024-07-13 06:21:40.967532] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.541 [2024-07-13 06:21:40.967557] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.541 [2024-07-13 06:21:40.967574] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.541 [2024-07-13 06:21:40.969958] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.541 [2024-07-13 06:21:40.979355] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.541 [2024-07-13 06:21:40.979696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.541 [2024-07-13 06:21:40.979917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.541 [2024-07-13 06:21:40.979947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.541 [2024-07-13 06:21:40.979966] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.541 [2024-07-13 06:21:40.980131] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.541 [2024-07-13 06:21:40.980290] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.541 [2024-07-13 06:21:40.980315] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.541 [2024-07-13 06:21:40.980332] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.541 [2024-07-13 06:21:40.982727] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.541 [2024-07-13 06:21:40.992177] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.541 [2024-07-13 06:21:40.992563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.541 [2024-07-13 06:21:40.992777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.541 [2024-07-13 06:21:40.992806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.541 [2024-07-13 06:21:40.992824] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.541 [2024-07-13 06:21:40.993062] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.541 [2024-07-13 06:21:40.993244] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.541 [2024-07-13 06:21:40.993270] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.541 [2024-07-13 06:21:40.993286] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.541 [2024-07-13 06:21:40.995837] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.541 [2024-07-13 06:21:41.004668] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.541 [2024-07-13 06:21:41.004992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.541 [2024-07-13 06:21:41.005175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.541 [2024-07-13 06:21:41.005221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.541 [2024-07-13 06:21:41.005239] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.541 [2024-07-13 06:21:41.005438] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.541 [2024-07-13 06:21:41.005619] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.541 [2024-07-13 06:21:41.005645] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.541 [2024-07-13 06:21:41.005662] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.541 [2024-07-13 06:21:41.008191] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.541 [2024-07-13 06:21:41.017552] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.541 [2024-07-13 06:21:41.017881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.541 [2024-07-13 06:21:41.018073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.541 [2024-07-13 06:21:41.018102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.541 [2024-07-13 06:21:41.018120] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.541 [2024-07-13 06:21:41.018282] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.541 [2024-07-13 06:21:41.018418] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.541 [2024-07-13 06:21:41.018443] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.541 [2024-07-13 06:21:41.018460] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.541 [2024-07-13 06:21:41.020881] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.541 [2024-07-13 06:21:41.030466] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.541 [2024-07-13 06:21:41.030804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.541 [2024-07-13 06:21:41.031028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.541 [2024-07-13 06:21:41.031058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.541 [2024-07-13 06:21:41.031076] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.541 [2024-07-13 06:21:41.031206] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.541 [2024-07-13 06:21:41.031364] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.541 [2024-07-13 06:21:41.031390] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.541 [2024-07-13 06:21:41.031407] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.541 [2024-07-13 06:21:41.033854] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.541 [2024-07-13 06:21:41.043044] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.541 [2024-07-13 06:21:41.043391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.541 [2024-07-13 06:21:41.043628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.541 [2024-07-13 06:21:41.043676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.541 [2024-07-13 06:21:41.043694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.541 [2024-07-13 06:21:41.043881] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.541 [2024-07-13 06:21:41.044041] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.541 [2024-07-13 06:21:41.044065] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.541 [2024-07-13 06:21:41.044080] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.541 [2024-07-13 06:21:41.046678] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.799 [2024-07-13 06:21:41.055884] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.799 [2024-07-13 06:21:41.056404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.799 [2024-07-13 06:21:41.056693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.799 [2024-07-13 06:21:41.056734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.799 [2024-07-13 06:21:41.056751] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.799 [2024-07-13 06:21:41.056975] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.799 [2024-07-13 06:21:41.057135] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.799 [2024-07-13 06:21:41.057162] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.799 [2024-07-13 06:21:41.057178] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.799 [2024-07-13 06:21:41.059769] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.799 [2024-07-13 06:21:41.068919] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.799 [2024-07-13 06:21:41.069384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.799 [2024-07-13 06:21:41.069521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.799 [2024-07-13 06:21:41.069565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.799 [2024-07-13 06:21:41.069583] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.799 [2024-07-13 06:21:41.069718] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.799 [2024-07-13 06:21:41.069880] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.799 [2024-07-13 06:21:41.069906] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.799 [2024-07-13 06:21:41.069922] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.799 [2024-07-13 06:21:41.072357] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.799 [2024-07-13 06:21:41.081640] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.799 [2024-07-13 06:21:41.082058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.799 [2024-07-13 06:21:41.082250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.799 [2024-07-13 06:21:41.082276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.799 [2024-07-13 06:21:41.082292] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.799 [2024-07-13 06:21:41.082537] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.799 [2024-07-13 06:21:41.082737] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.799 [2024-07-13 06:21:41.082762] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.799 [2024-07-13 06:21:41.082779] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.799 [2024-07-13 06:21:41.085242] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.799 [2024-07-13 06:21:41.094400] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.799 [2024-07-13 06:21:41.094739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.799 [2024-07-13 06:21:41.094936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.799 [2024-07-13 06:21:41.094966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.799 [2024-07-13 06:21:41.094984] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.799 [2024-07-13 06:21:41.095178] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.799 [2024-07-13 06:21:41.095377] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.799 [2024-07-13 06:21:41.095402] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.799 [2024-07-13 06:21:41.095419] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.799 [2024-07-13 06:21:41.097964] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.799 [2024-07-13 06:21:41.107424] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.799 [2024-07-13 06:21:41.107919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.799 [2024-07-13 06:21:41.108152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.799 [2024-07-13 06:21:41.108205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.799 [2024-07-13 06:21:41.108224] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.799 [2024-07-13 06:21:41.108394] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.799 [2024-07-13 06:21:41.108557] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.799 [2024-07-13 06:21:41.108583] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.799 [2024-07-13 06:21:41.108599] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.799 [2024-07-13 06:21:41.111031] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.799 [2024-07-13 06:21:41.119990] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.799 [2024-07-13 06:21:41.120369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.799 [2024-07-13 06:21:41.120591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.799 [2024-07-13 06:21:41.120637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.799 [2024-07-13 06:21:41.120656] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.799 [2024-07-13 06:21:41.120814] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.799 [2024-07-13 06:21:41.121016] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.799 [2024-07-13 06:21:41.121043] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.799 [2024-07-13 06:21:41.121059] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.799 [2024-07-13 06:21:41.123561] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.800 [2024-07-13 06:21:41.132522] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.800 [2024-07-13 06:21:41.132959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.800 [2024-07-13 06:21:41.133161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.800 [2024-07-13 06:21:41.133190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.800 [2024-07-13 06:21:41.133213] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.800 [2024-07-13 06:21:41.133348] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.800 [2024-07-13 06:21:41.133534] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.800 [2024-07-13 06:21:41.133560] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.800 [2024-07-13 06:21:41.133577] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.800 [2024-07-13 06:21:41.136062] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.800 [2024-07-13 06:21:41.145169] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.800 [2024-07-13 06:21:41.145590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.800 [2024-07-13 06:21:41.145755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.800 [2024-07-13 06:21:41.145781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.800 [2024-07-13 06:21:41.145814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.800 [2024-07-13 06:21:41.146020] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.800 [2024-07-13 06:21:41.146224] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.800 [2024-07-13 06:21:41.146250] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.800 [2024-07-13 06:21:41.146266] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.800 [2024-07-13 06:21:41.148723] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.800 [2024-07-13 06:21:41.158001] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.800 [2024-07-13 06:21:41.158381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.800 [2024-07-13 06:21:41.158580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.800 [2024-07-13 06:21:41.158627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.800 [2024-07-13 06:21:41.158645] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.800 [2024-07-13 06:21:41.158857] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.800 [2024-07-13 06:21:41.159030] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.800 [2024-07-13 06:21:41.159056] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.800 [2024-07-13 06:21:41.159072] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.800 [2024-07-13 06:21:41.161769] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.800 [2024-07-13 06:21:41.170469] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.800 [2024-07-13 06:21:41.170823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.800 [2024-07-13 06:21:41.171039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.800 [2024-07-13 06:21:41.171066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.800 [2024-07-13 06:21:41.171097] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.800 [2024-07-13 06:21:41.171261] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.800 [2024-07-13 06:21:41.171447] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.800 [2024-07-13 06:21:41.171473] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.800 [2024-07-13 06:21:41.171489] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.800 [2024-07-13 06:21:41.174028] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.800 [2024-07-13 06:21:41.183107] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.800 [2024-07-13 06:21:41.183496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.800 [2024-07-13 06:21:41.183684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.800 [2024-07-13 06:21:41.183713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.800 [2024-07-13 06:21:41.183731] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.800 [2024-07-13 06:21:41.183914] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.800 [2024-07-13 06:21:41.184091] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.800 [2024-07-13 06:21:41.184117] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.800 [2024-07-13 06:21:41.184133] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.800 [2024-07-13 06:21:41.186582] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.800 [2024-07-13 06:21:41.195783] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.800 [2024-07-13 06:21:41.196181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.800 [2024-07-13 06:21:41.196403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.800 [2024-07-13 06:21:41.196432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.800 [2024-07-13 06:21:41.196450] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.800 [2024-07-13 06:21:41.196654] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.800 [2024-07-13 06:21:41.196892] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.800 [2024-07-13 06:21:41.196919] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.800 [2024-07-13 06:21:41.196935] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.800 [2024-07-13 06:21:41.199477] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.800 [2024-07-13 06:21:41.208549] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.800 [2024-07-13 06:21:41.208934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.800 [2024-07-13 06:21:41.209097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.800 [2024-07-13 06:21:41.209126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.800 [2024-07-13 06:21:41.209144] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.800 [2024-07-13 06:21:41.209379] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.800 [2024-07-13 06:21:41.209588] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.800 [2024-07-13 06:21:41.209615] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.800 [2024-07-13 06:21:41.209631] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.800 [2024-07-13 06:21:41.212123] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.800 [2024-07-13 06:21:41.221385] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.800 [2024-07-13 06:21:41.221698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.800 [2024-07-13 06:21:41.221878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.800 [2024-07-13 06:21:41.221909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.800 [2024-07-13 06:21:41.221927] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.800 [2024-07-13 06:21:41.222067] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.800 [2024-07-13 06:21:41.222235] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.800 [2024-07-13 06:21:41.222260] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.800 [2024-07-13 06:21:41.222277] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.800 [2024-07-13 06:21:41.224761] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.800 [2024-07-13 06:21:41.234132] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.800 [2024-07-13 06:21:41.234477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.800 [2024-07-13 06:21:41.234632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.800 [2024-07-13 06:21:41.234679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.800 [2024-07-13 06:21:41.234698] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.800 [2024-07-13 06:21:41.234828] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.800 [2024-07-13 06:21:41.234982] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.800 [2024-07-13 06:21:41.235008] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.800 [2024-07-13 06:21:41.235025] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.800 [2024-07-13 06:21:41.237546] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.800 [2024-07-13 06:21:41.247097] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.800 [2024-07-13 06:21:41.247471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.800 [2024-07-13 06:21:41.247630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.800 [2024-07-13 06:21:41.247656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.800 [2024-07-13 06:21:41.247688] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.800 [2024-07-13 06:21:41.247949] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.800 [2024-07-13 06:21:41.248141] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.800 [2024-07-13 06:21:41.248172] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.800 [2024-07-13 06:21:41.248189] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.801 [2024-07-13 06:21:41.250665] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.801 [2024-07-13 06:21:41.259877] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.801 [2024-07-13 06:21:41.260291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.801 [2024-07-13 06:21:41.260476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.801 [2024-07-13 06:21:41.260519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.801 [2024-07-13 06:21:41.260537] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.801 [2024-07-13 06:21:41.260754] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.801 [2024-07-13 06:21:41.260956] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.801 [2024-07-13 06:21:41.260983] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.801 [2024-07-13 06:21:41.261000] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.801 [2024-07-13 06:21:41.263274] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.801 [2024-07-13 06:21:41.272612] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.801 [2024-07-13 06:21:41.273005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.801 [2024-07-13 06:21:41.273206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.801 [2024-07-13 06:21:41.273232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.801 [2024-07-13 06:21:41.273249] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.801 [2024-07-13 06:21:41.273455] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.801 [2024-07-13 06:21:41.273591] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.801 [2024-07-13 06:21:41.273617] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.801 [2024-07-13 06:21:41.273633] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.801 [2024-07-13 06:21:41.276264] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:34.801 [2024-07-13 06:21:41.285309] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.801 [2024-07-13 06:21:41.285674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.801 [2024-07-13 06:21:41.285803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.801 [2024-07-13 06:21:41.285831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.801 [2024-07-13 06:21:41.285848] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.801 [2024-07-13 06:21:41.286043] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.801 [2024-07-13 06:21:41.286248] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.801 [2024-07-13 06:21:41.286274] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.801 [2024-07-13 06:21:41.286295] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.801 [2024-07-13 06:21:41.288645] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:34.801 [2024-07-13 06:21:41.298175] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:34.801 [2024-07-13 06:21:41.298551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.801 [2024-07-13 06:21:41.298770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.801 [2024-07-13 06:21:41.298795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:34.801 [2024-07-13 06:21:41.298811] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:34.801 [2024-07-13 06:21:41.299032] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:34.801 [2024-07-13 06:21:41.299151] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:34.801 [2024-07-13 06:21:41.299176] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:34.801 [2024-07-13 06:21:41.299192] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:34.801 [2024-07-13 06:21:41.301813] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.060 [2024-07-13 06:21:41.311086] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.060 [2024-07-13 06:21:41.311525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.060 [2024-07-13 06:21:41.311813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.060 [2024-07-13 06:21:41.311860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.060 [2024-07-13 06:21:41.311891] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.060 [2024-07-13 06:21:41.312063] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.060 [2024-07-13 06:21:41.312253] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.060 [2024-07-13 06:21:41.312278] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.060 [2024-07-13 06:21:41.312295] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.060 [2024-07-13 06:21:41.314830] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.060 [2024-07-13 06:21:41.323765] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.060 [2024-07-13 06:21:41.324217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.060 [2024-07-13 06:21:41.324425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.060 [2024-07-13 06:21:41.324513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.060 [2024-07-13 06:21:41.324532] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.060 [2024-07-13 06:21:41.324694] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.060 [2024-07-13 06:21:41.324937] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.060 [2024-07-13 06:21:41.324963] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.060 [2024-07-13 06:21:41.324980] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.060 [2024-07-13 06:21:41.327582] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.060 [2024-07-13 06:21:41.336641] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.060 [2024-07-13 06:21:41.337028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.060 [2024-07-13 06:21:41.337193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.060 [2024-07-13 06:21:41.337223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.060 [2024-07-13 06:21:41.337241] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.060 [2024-07-13 06:21:41.337405] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.060 [2024-07-13 06:21:41.337576] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.060 [2024-07-13 06:21:41.337603] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.060 [2024-07-13 06:21:41.337619] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.060 [2024-07-13 06:21:41.340144] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.060 [2024-07-13 06:21:41.349509] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.060 [2024-07-13 06:21:41.349883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.060 [2024-07-13 06:21:41.350077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.060 [2024-07-13 06:21:41.350108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.060 [2024-07-13 06:21:41.350127] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.060 [2024-07-13 06:21:41.350322] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.060 [2024-07-13 06:21:41.350516] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.060 [2024-07-13 06:21:41.350543] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.060 [2024-07-13 06:21:41.350559] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.060 [2024-07-13 06:21:41.353088] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.060 [2024-07-13 06:21:41.362253] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.060 [2024-07-13 06:21:41.362616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.060 [2024-07-13 06:21:41.362794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.060 [2024-07-13 06:21:41.362822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.060 [2024-07-13 06:21:41.362839] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.060 [2024-07-13 06:21:41.363005] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.060 [2024-07-13 06:21:41.363170] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.060 [2024-07-13 06:21:41.363196] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.060 [2024-07-13 06:21:41.363213] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.060 [2024-07-13 06:21:41.365741] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.060 [2024-07-13 06:21:41.374962] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.060 [2024-07-13 06:21:41.375284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.060 [2024-07-13 06:21:41.375482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.060 [2024-07-13 06:21:41.375512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.060 [2024-07-13 06:21:41.375530] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.060 [2024-07-13 06:21:41.375666] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.060 [2024-07-13 06:21:41.375878] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.060 [2024-07-13 06:21:41.375905] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.060 [2024-07-13 06:21:41.375922] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.060 [2024-07-13 06:21:41.378327] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.060 [2024-07-13 06:21:41.387822] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.060 [2024-07-13 06:21:41.388194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.060 [2024-07-13 06:21:41.388443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.060 [2024-07-13 06:21:41.388496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.060 [2024-07-13 06:21:41.388515] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.060 [2024-07-13 06:21:41.388720] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.060 [2024-07-13 06:21:41.388928] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.060 [2024-07-13 06:21:41.388955] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.060 [2024-07-13 06:21:41.388972] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.060 [2024-07-13 06:21:41.391650] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.060 [2024-07-13 06:21:41.400475] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.060 [2024-07-13 06:21:41.400935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.060 [2024-07-13 06:21:41.401107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.060 [2024-07-13 06:21:41.401136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.060 [2024-07-13 06:21:41.401154] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.060 [2024-07-13 06:21:41.401288] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.060 [2024-07-13 06:21:41.401447] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.060 [2024-07-13 06:21:41.401472] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.061 [2024-07-13 06:21:41.401489] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.061 [2024-07-13 06:21:41.403680] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.061 [2024-07-13 06:21:41.413204] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.061 [2024-07-13 06:21:41.413623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.061 [2024-07-13 06:21:41.413799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.061 [2024-07-13 06:21:41.413828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.061 [2024-07-13 06:21:41.413847] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.061 [2024-07-13 06:21:41.414025] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.061 [2024-07-13 06:21:41.414188] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.061 [2024-07-13 06:21:41.414215] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.061 [2024-07-13 06:21:41.414232] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.061 [2024-07-13 06:21:41.416767] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.061 [2024-07-13 06:21:41.426171] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.061 [2024-07-13 06:21:41.426526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.061 [2024-07-13 06:21:41.426682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.061 [2024-07-13 06:21:41.426730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.061 [2024-07-13 06:21:41.426749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.061 [2024-07-13 06:21:41.426920] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.061 [2024-07-13 06:21:41.427135] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.061 [2024-07-13 06:21:41.427162] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.061 [2024-07-13 06:21:41.427178] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.061 [2024-07-13 06:21:41.429744] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.061 [2024-07-13 06:21:41.438907] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.061 [2024-07-13 06:21:41.439418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.061 [2024-07-13 06:21:41.439752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.061 [2024-07-13 06:21:41.439810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.061 [2024-07-13 06:21:41.439828] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.061 [2024-07-13 06:21:41.440063] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.061 [2024-07-13 06:21:41.440226] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.061 [2024-07-13 06:21:41.440251] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.061 [2024-07-13 06:21:41.440267] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.061 [2024-07-13 06:21:41.442772] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.061 [2024-07-13 06:21:41.451410] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.061 [2024-07-13 06:21:41.451768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.061 [2024-07-13 06:21:41.451995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.061 [2024-07-13 06:21:41.452026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.061 [2024-07-13 06:21:41.452044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.061 [2024-07-13 06:21:41.452257] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.061 [2024-07-13 06:21:41.452439] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.061 [2024-07-13 06:21:41.452467] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.061 [2024-07-13 06:21:41.452485] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.061 [2024-07-13 06:21:41.455249] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.061 [2024-07-13 06:21:41.463930] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.061 [2024-07-13 06:21:41.464396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.061 [2024-07-13 06:21:41.464606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.061 [2024-07-13 06:21:41.464651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.061 [2024-07-13 06:21:41.464669] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.061 [2024-07-13 06:21:41.464886] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.061 [2024-07-13 06:21:41.465062] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.061 [2024-07-13 06:21:41.465089] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.061 [2024-07-13 06:21:41.465105] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.061 [2024-07-13 06:21:41.467844] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.061 [2024-07-13 06:21:41.476583] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.061 [2024-07-13 06:21:41.476920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.061 [2024-07-13 06:21:41.477091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.061 [2024-07-13 06:21:41.477122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.061 [2024-07-13 06:21:41.477140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.061 [2024-07-13 06:21:41.477318] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.061 [2024-07-13 06:21:41.477504] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.061 [2024-07-13 06:21:41.477531] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.061 [2024-07-13 06:21:41.477547] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.061 [2024-07-13 06:21:41.480072] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.061 [2024-07-13 06:21:41.489533] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.061 [2024-07-13 06:21:41.489912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.061 [2024-07-13 06:21:41.490175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.061 [2024-07-13 06:21:41.490222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.061 [2024-07-13 06:21:41.490246] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.061 [2024-07-13 06:21:41.490383] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.061 [2024-07-13 06:21:41.490505] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.061 [2024-07-13 06:21:41.490530] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.061 [2024-07-13 06:21:41.490546] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.061 [2024-07-13 06:21:41.493005] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.061 [2024-07-13 06:21:41.502311] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.061 [2024-07-13 06:21:41.502718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.061 [2024-07-13 06:21:41.502896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.061 [2024-07-13 06:21:41.502928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.061 [2024-07-13 06:21:41.502947] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.061 [2024-07-13 06:21:41.503114] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.061 [2024-07-13 06:21:41.503277] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.061 [2024-07-13 06:21:41.503303] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.061 [2024-07-13 06:21:41.503320] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.061 [2024-07-13 06:21:41.505750] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.061 [2024-07-13 06:21:41.515136] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.061 [2024-07-13 06:21:41.515575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.061 [2024-07-13 06:21:41.515763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.061 [2024-07-13 06:21:41.515791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.061 [2024-07-13 06:21:41.515809] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.061 [2024-07-13 06:21:41.515970] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.061 [2024-07-13 06:21:41.516115] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.061 [2024-07-13 06:21:41.516140] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.061 [2024-07-13 06:21:41.516157] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.061 [2024-07-13 06:21:41.518661] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.061 [2024-07-13 06:21:41.527840] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.061 [2024-07-13 06:21:41.528254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.061 [2024-07-13 06:21:41.528422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.061 [2024-07-13 06:21:41.528450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.061 [2024-07-13 06:21:41.528468] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.061 [2024-07-13 06:21:41.528650] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.061 [2024-07-13 06:21:41.528813] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.061 [2024-07-13 06:21:41.528839] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.061 [2024-07-13 06:21:41.528856] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.062 [2024-07-13 06:21:41.531347] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.062 [2024-07-13 06:21:41.540442] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.062 [2024-07-13 06:21:41.540823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.062 [2024-07-13 06:21:41.541068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.062 [2024-07-13 06:21:41.541112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.062 [2024-07-13 06:21:41.541131] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.062 [2024-07-13 06:21:41.541313] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.062 [2024-07-13 06:21:41.541494] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.062 [2024-07-13 06:21:41.541521] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.062 [2024-07-13 06:21:41.541538] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.062 [2024-07-13 06:21:41.544038] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.062 [2024-07-13 06:21:41.553233] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.062 [2024-07-13 06:21:41.553713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.062 [2024-07-13 06:21:41.553913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.062 [2024-07-13 06:21:41.553943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.062 [2024-07-13 06:21:41.553960] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.062 [2024-07-13 06:21:41.554096] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.062 [2024-07-13 06:21:41.554217] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.062 [2024-07-13 06:21:41.554243] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.062 [2024-07-13 06:21:41.554259] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.062 [2024-07-13 06:21:41.556650] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.062 [2024-07-13 06:21:41.565883] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.062 [2024-07-13 06:21:41.566228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.062 [2024-07-13 06:21:41.566408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.062 [2024-07-13 06:21:41.566436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.062 [2024-07-13 06:21:41.566452] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.062 [2024-07-13 06:21:41.566648] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.062 [2024-07-13 06:21:41.566809] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.062 [2024-07-13 06:21:41.566836] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.062 [2024-07-13 06:21:41.566852] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.062 [2024-07-13 06:21:41.569280] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.321 [2024-07-13 06:21:41.578760] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.321 [2024-07-13 06:21:41.579176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.321 [2024-07-13 06:21:41.579430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.321 [2024-07-13 06:21:41.579472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.321 [2024-07-13 06:21:41.579488] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.321 [2024-07-13 06:21:41.579679] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.321 [2024-07-13 06:21:41.579919] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.321 [2024-07-13 06:21:41.579946] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.321 [2024-07-13 06:21:41.579963] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.321 [2024-07-13 06:21:41.582387] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.321 [2024-07-13 06:21:41.591730] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.321 [2024-07-13 06:21:41.592097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.321 [2024-07-13 06:21:41.592317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.321 [2024-07-13 06:21:41.592345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.321 [2024-07-13 06:21:41.592364] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.321 [2024-07-13 06:21:41.592544] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.321 [2024-07-13 06:21:41.592743] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.321 [2024-07-13 06:21:41.592770] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.321 [2024-07-13 06:21:41.592786] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.321 [2024-07-13 06:21:41.595373] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.321 [2024-07-13 06:21:41.604497] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.321 [2024-07-13 06:21:41.604968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.321 [2024-07-13 06:21:41.605225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.321 [2024-07-13 06:21:41.605279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.321 [2024-07-13 06:21:41.605297] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.321 [2024-07-13 06:21:41.605474] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.321 [2024-07-13 06:21:41.605709] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.321 [2024-07-13 06:21:41.605741] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.321 [2024-07-13 06:21:41.605759] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.321 [2024-07-13 06:21:41.607919] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.321 [2024-07-13 06:21:41.617177] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.321 [2024-07-13 06:21:41.617563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.321 [2024-07-13 06:21:41.617718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.321 [2024-07-13 06:21:41.617747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.321 [2024-07-13 06:21:41.617764] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.321 [2024-07-13 06:21:41.617960] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.321 [2024-07-13 06:21:41.618164] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.321 [2024-07-13 06:21:41.618191] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.321 [2024-07-13 06:21:41.618208] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.321 [2024-07-13 06:21:41.620832] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.321 [2024-07-13 06:21:41.630020] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.321 [2024-07-13 06:21:41.630398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.321 [2024-07-13 06:21:41.630596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.321 [2024-07-13 06:21:41.630624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.321 [2024-07-13 06:21:41.630642] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.321 [2024-07-13 06:21:41.630836] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.321 [2024-07-13 06:21:41.631088] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.321 [2024-07-13 06:21:41.631115] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.321 [2024-07-13 06:21:41.631132] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.321 [2024-07-13 06:21:41.633536] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.321 [2024-07-13 06:21:41.642505] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.321 [2024-07-13 06:21:41.642887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.321 [2024-07-13 06:21:41.643091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.321 [2024-07-13 06:21:41.643120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.322 [2024-07-13 06:21:41.643138] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.322 [2024-07-13 06:21:41.643343] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.322 [2024-07-13 06:21:41.643500] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.322 [2024-07-13 06:21:41.643526] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.322 [2024-07-13 06:21:41.643549] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.322 [2024-07-13 06:21:41.645969] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.322 [2024-07-13 06:21:41.655239] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.322 [2024-07-13 06:21:41.655606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.322 [2024-07-13 06:21:41.655770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.322 [2024-07-13 06:21:41.655799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.322 [2024-07-13 06:21:41.655816] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.322 [2024-07-13 06:21:41.656009] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.322 [2024-07-13 06:21:41.656163] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.322 [2024-07-13 06:21:41.656190] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.322 [2024-07-13 06:21:41.656207] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.322 [2024-07-13 06:21:41.658689] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.322 [2024-07-13 06:21:41.668009] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.322 [2024-07-13 06:21:41.668309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.322 [2024-07-13 06:21:41.668571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.322 [2024-07-13 06:21:41.668597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.322 [2024-07-13 06:21:41.668613] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.322 [2024-07-13 06:21:41.668831] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.322 [2024-07-13 06:21:41.669005] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.322 [2024-07-13 06:21:41.669029] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.322 [2024-07-13 06:21:41.669043] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.322 [2024-07-13 06:21:41.671531] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.322 [2024-07-13 06:21:41.680757] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.322 [2024-07-13 06:21:41.681081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.322 [2024-07-13 06:21:41.681209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.322 [2024-07-13 06:21:41.681235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.322 [2024-07-13 06:21:41.681268] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.322 [2024-07-13 06:21:41.681502] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.322 [2024-07-13 06:21:41.681666] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.322 [2024-07-13 06:21:41.681692] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.322 [2024-07-13 06:21:41.681709] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.322 [2024-07-13 06:21:41.684077] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.322 [2024-07-13 06:21:41.693695] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.322 [2024-07-13 06:21:41.694017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.322 [2024-07-13 06:21:41.694194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.322 [2024-07-13 06:21:41.694224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.322 [2024-07-13 06:21:41.694242] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.322 [2024-07-13 06:21:41.694423] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.322 [2024-07-13 06:21:41.694622] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.322 [2024-07-13 06:21:41.694648] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.322 [2024-07-13 06:21:41.694665] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.322 [2024-07-13 06:21:41.697121] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.322 [2024-07-13 06:21:41.706477] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.322 [2024-07-13 06:21:41.706815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.322 [2024-07-13 06:21:41.707011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.322 [2024-07-13 06:21:41.707042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.322 [2024-07-13 06:21:41.707060] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.322 [2024-07-13 06:21:41.707195] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.322 [2024-07-13 06:21:41.707376] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.322 [2024-07-13 06:21:41.707402] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.322 [2024-07-13 06:21:41.707419] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.322 [2024-07-13 06:21:41.709842] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.322 [2024-07-13 06:21:41.719391] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.322 [2024-07-13 06:21:41.719741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.322 [2024-07-13 06:21:41.719897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.322 [2024-07-13 06:21:41.719925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.322 [2024-07-13 06:21:41.719942] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.322 [2024-07-13 06:21:41.720123] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.322 [2024-07-13 06:21:41.720310] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.322 [2024-07-13 06:21:41.720336] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.322 [2024-07-13 06:21:41.720352] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.322 [2024-07-13 06:21:41.722671] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.322 [2024-07-13 06:21:41.732395] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.322 [2024-07-13 06:21:41.732711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.322 [2024-07-13 06:21:41.732922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.322 [2024-07-13 06:21:41.732961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.322 [2024-07-13 06:21:41.732985] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.322 [2024-07-13 06:21:41.733236] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.322 [2024-07-13 06:21:41.733414] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.322 [2024-07-13 06:21:41.733441] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.322 [2024-07-13 06:21:41.733458] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.322 [2024-07-13 06:21:41.735878] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.322 [2024-07-13 06:21:41.745026] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.322 [2024-07-13 06:21:41.745353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.322 [2024-07-13 06:21:41.745510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.322 [2024-07-13 06:21:41.745538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.322 [2024-07-13 06:21:41.745554] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.322 [2024-07-13 06:21:41.745778] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.322 [2024-07-13 06:21:41.745946] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.323 [2024-07-13 06:21:41.745973] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.323 [2024-07-13 06:21:41.745989] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.323 [2024-07-13 06:21:41.748414] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1231799 Killed "${NVMF_APP[@]}" "$@" 00:26:35.323 06:21:41 -- host/bdevperf.sh@36 -- # tgt_init 00:26:35.323 06:21:41 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:35.323 06:21:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:35.323 06:21:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:35.323 06:21:41 -- common/autotest_common.sh@10 -- # set +x 00:26:35.323 [2024-07-13 06:21:41.757709] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.323 [2024-07-13 06:21:41.758071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.323 [2024-07-13 06:21:41.758217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.323 [2024-07-13 06:21:41.758243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.323 [2024-07-13 06:21:41.758259] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.323 [2024-07-13 06:21:41.758439] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.323 [2024-07-13 06:21:41.758620] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.323 [2024-07-13 06:21:41.758646] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.323 [2024-07-13 06:21:41.758663] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.323 [2024-07-13 06:21:41.761201] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.323 06:21:41 -- nvmf/common.sh@469 -- # nvmfpid=1232795 00:26:35.323 06:21:41 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:35.323 06:21:41 -- nvmf/common.sh@470 -- # waitforlisten 1232795 00:26:35.323 06:21:41 -- common/autotest_common.sh@819 -- # '[' -z 1232795 ']' 00:26:35.323 06:21:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:35.323 06:21:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:35.323 06:21:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:35.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:35.323 06:21:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:35.323 06:21:41 -- common/autotest_common.sh@10 -- # set +x 00:26:35.323 [2024-07-13 06:21:41.770497] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.323 [2024-07-13 06:21:41.770851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.323 [2024-07-13 06:21:41.770999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.323 [2024-07-13 06:21:41.771025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.323 [2024-07-13 06:21:41.771041] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.323 [2024-07-13 06:21:41.771199] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.323 [2024-07-13 06:21:41.771380] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.323 [2024-07-13 06:21:41.771405] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.323 [2024-07-13 06:21:41.771421] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.323 [2024-07-13 06:21:41.773767] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.323 [2024-07-13 06:21:41.783109] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.323 [2024-07-13 06:21:41.783416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.323 [2024-07-13 06:21:41.783596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.323 [2024-07-13 06:21:41.783623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.323 [2024-07-13 06:21:41.783640] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.323 [2024-07-13 06:21:41.783760] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.323 [2024-07-13 06:21:41.783968] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.323 [2024-07-13 06:21:41.783991] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.323 [2024-07-13 06:21:41.784006] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.323 [2024-07-13 06:21:41.786248] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
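The shell trace interleaved above (tgt_init, nvmfappstart -m 0xE, nvmfpid=..., ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt, waitforlisten) shows the test killing the previous target and bringing up a fresh nvmf_tgt inside the test namespace, then polling for its RPC socket before continuing. A rough stand-alone sketch of that sequence, assuming SPDK_ROOT points at a built SPDK tree and the namespace already exists; this is not the exact helper code from nvmf/common.sh:

#!/usr/bin/env bash
set -e
SPDK_ROOT=${SPDK_ROOT:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}

# Stop any previous target (the log above shows the old nvmf_tgt PID being killed).
pkill -f nvmf_tgt || true

# Start a new target on core mask 0xE inside the test namespace.
ip netns exec cvl_0_0_ns_spdk "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# Poll until the default RPC socket answers, roughly what waitforlisten does.
for _ in $(seq 1 100); do
  if "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
    echo "nvmf_tgt ($nvmfpid) is listening on /var/tmp/spdk.sock"
    break
  fi
  sleep 0.5
done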
00:26:35.323 [2024-07-13 06:21:41.795285] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.323 [2024-07-13 06:21:41.795705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.323 [2024-07-13 06:21:41.795831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.323 [2024-07-13 06:21:41.795857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.323 [2024-07-13 06:21:41.795894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.323 [2024-07-13 06:21:41.796059] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.323 [2024-07-13 06:21:41.796299] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.323 [2024-07-13 06:21:41.796319] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.323 [2024-07-13 06:21:41.796332] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.323 [2024-07-13 06:21:41.798341] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.323 [2024-07-13 06:21:41.801694] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:35.323 [2024-07-13 06:21:41.801763] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:35.323 [2024-07-13 06:21:41.807494] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.323 [2024-07-13 06:21:41.807821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.323 [2024-07-13 06:21:41.807958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.323 [2024-07-13 06:21:41.807984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.323 [2024-07-13 06:21:41.808000] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.323 [2024-07-13 06:21:41.808125] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.323 [2024-07-13 06:21:41.808290] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.323 [2024-07-13 06:21:41.808310] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.323 [2024-07-13 06:21:41.808323] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.323 [2024-07-13 06:21:41.810439] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.323 [2024-07-13 06:21:41.819649] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.323 [2024-07-13 06:21:41.820011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.323 [2024-07-13 06:21:41.820174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.323 [2024-07-13 06:21:41.820200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.323 [2024-07-13 06:21:41.820216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.323 [2024-07-13 06:21:41.820393] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.323 [2024-07-13 06:21:41.820539] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.323 [2024-07-13 06:21:41.820559] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.323 [2024-07-13 06:21:41.820572] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.323 [2024-07-13 06:21:41.822910] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.582 [2024-07-13 06:21:41.832365] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.582 [2024-07-13 06:21:41.832691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.582 [2024-07-13 06:21:41.832895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.582 [2024-07-13 06:21:41.832923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.582 [2024-07-13 06:21:41.832939] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.582 [2024-07-13 06:21:41.833072] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.582 [2024-07-13 06:21:41.833270] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.583 [2024-07-13 06:21:41.833292] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.583 [2024-07-13 06:21:41.833305] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.583 [2024-07-13 06:21:41.835431] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.583 EAL: No free 2048 kB hugepages reported on node 1 00:26:35.583 [2024-07-13 06:21:41.845323] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.583 [2024-07-13 06:21:41.845664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.583 [2024-07-13 06:21:41.845876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.583 [2024-07-13 06:21:41.845903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.583 [2024-07-13 06:21:41.845919] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.583 [2024-07-13 06:21:41.846122] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.583 [2024-07-13 06:21:41.846282] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.583 [2024-07-13 06:21:41.846302] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.583 [2024-07-13 06:21:41.846315] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.583 [2024-07-13 06:21:41.848527] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.583 [2024-07-13 06:21:41.858127] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.583 [2024-07-13 06:21:41.858527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.583 [2024-07-13 06:21:41.858706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.583 [2024-07-13 06:21:41.858732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.583 [2024-07-13 06:21:41.858748] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.583 [2024-07-13 06:21:41.858990] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.583 [2024-07-13 06:21:41.859138] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.583 [2024-07-13 06:21:41.859160] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.583 [2024-07-13 06:21:41.859174] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.583 [2024-07-13 06:21:41.861862] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
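The "EAL: No free 2048 kB hugepages reported on node 1" line above is DPDK noting that NUMA node 1 has no free 2 MB hugepages; initialization continues, presumably because node 0 or another page size covers the allocation. A quick way to inspect the per-node pools, plus an illustrative (not required here) way to top them up, using the standard sysfs paths:

# Show total/free 2 MB hugepages per NUMA node.
for n in /sys/devices/system/node/node*/hugepages/hugepages-2048kB; do
  echo "$n: total=$(cat "$n/nr_hugepages") free=$(cat "$n/free_hugepages")"
done
# Illustrative only: reserve 1024 x 2 MB pages on node 1 (needs root).
# echo 1024 | sudo tee /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages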
00:26:35.583 [2024-07-13 06:21:41.871037] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.583 [2024-07-13 06:21:41.871415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.583 [2024-07-13 06:21:41.871575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.583 [2024-07-13 06:21:41.871602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.583 [2024-07-13 06:21:41.871624] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.583 [2024-07-13 06:21:41.871829] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.583 [2024-07-13 06:21:41.871972] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.583 [2024-07-13 06:21:41.871996] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.583 [2024-07-13 06:21:41.872010] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.583 [2024-07-13 06:21:41.874314] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.583 [2024-07-13 06:21:41.874750] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:35.583 [2024-07-13 06:21:41.883581] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.583 [2024-07-13 06:21:41.884112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.583 [2024-07-13 06:21:41.884291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.583 [2024-07-13 06:21:41.884328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.583 [2024-07-13 06:21:41.884346] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.583 [2024-07-13 06:21:41.884566] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.583 [2024-07-13 06:21:41.884740] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.583 [2024-07-13 06:21:41.884761] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.583 [2024-07-13 06:21:41.884776] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.583 [2024-07-13 06:21:41.887477] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
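The "Total cores available: 3" notice above is consistent with the -m 0xE core mask passed to nvmf_tgt: 0xE is binary 1110, which selects cores 1, 2 and 3. A one-liner to expand any such mask (a sketch, assuming python3 is available):

# List the CPU indices selected by an SPDK/DPDK core mask.
mask=0xE
python3 -c "m=int('$mask', 16); print([i for i in range(m.bit_length()) if m >> i & 1])"
# -> [1, 2, 3]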
00:26:35.583 [2024-07-13 06:21:41.896226] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.583 [2024-07-13 06:21:41.896663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.583 [2024-07-13 06:21:41.896877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.583 [2024-07-13 06:21:41.896904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.583 [2024-07-13 06:21:41.896922] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.583 [2024-07-13 06:21:41.897097] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.583 [2024-07-13 06:21:41.897264] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.583 [2024-07-13 06:21:41.897285] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.583 [2024-07-13 06:21:41.897299] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.583 [2024-07-13 06:21:41.899639] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.583 [2024-07-13 06:21:41.909100] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.583 [2024-07-13 06:21:41.909508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.583 [2024-07-13 06:21:41.909702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.583 [2024-07-13 06:21:41.909728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.583 [2024-07-13 06:21:41.909755] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.583 [2024-07-13 06:21:41.909993] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.583 [2024-07-13 06:21:41.910113] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.583 [2024-07-13 06:21:41.910134] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.583 [2024-07-13 06:21:41.910162] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.583 [2024-07-13 06:21:41.912698] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.583 [2024-07-13 06:21:41.921758] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.583 [2024-07-13 06:21:41.922062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.583 [2024-07-13 06:21:41.922229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.583 [2024-07-13 06:21:41.922255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.583 [2024-07-13 06:21:41.922270] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.583 [2024-07-13 06:21:41.922428] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.583 [2024-07-13 06:21:41.922595] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.583 [2024-07-13 06:21:41.922620] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.583 [2024-07-13 06:21:41.922636] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.583 [2024-07-13 06:21:41.924990] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.583 [2024-07-13 06:21:41.934484] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.583 [2024-07-13 06:21:41.934881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.583 [2024-07-13 06:21:41.935059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.583 [2024-07-13 06:21:41.935085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.583 [2024-07-13 06:21:41.935101] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.583 [2024-07-13 06:21:41.935249] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.583 [2024-07-13 06:21:41.935476] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.583 [2024-07-13 06:21:41.935501] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.583 [2024-07-13 06:21:41.935517] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.583 [2024-07-13 06:21:41.938084] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.583 [2024-07-13 06:21:41.947248] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.583 [2024-07-13 06:21:41.947796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.583 [2024-07-13 06:21:41.947986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.584 [2024-07-13 06:21:41.948013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.584 [2024-07-13 06:21:41.948032] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.584 [2024-07-13 06:21:41.948301] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.584 [2024-07-13 06:21:41.948509] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.584 [2024-07-13 06:21:41.948534] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.584 [2024-07-13 06:21:41.948553] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.584 [2024-07-13 06:21:41.950990] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.584 [2024-07-13 06:21:41.960100] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.584 [2024-07-13 06:21:41.960439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.584 [2024-07-13 06:21:41.960618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.584 [2024-07-13 06:21:41.960646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.584 [2024-07-13 06:21:41.960664] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.584 [2024-07-13 06:21:41.960840] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.584 [2024-07-13 06:21:41.961029] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.584 [2024-07-13 06:21:41.961052] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.584 [2024-07-13 06:21:41.961066] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.584 [2024-07-13 06:21:41.963613] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.584 [2024-07-13 06:21:41.972737] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.584 [2024-07-13 06:21:41.973065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.584 [2024-07-13 06:21:41.973243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.584 [2024-07-13 06:21:41.973272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.584 [2024-07-13 06:21:41.973290] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.584 [2024-07-13 06:21:41.973401] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.584 [2024-07-13 06:21:41.973608] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.584 [2024-07-13 06:21:41.973632] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.584 [2024-07-13 06:21:41.973648] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.584 [2024-07-13 06:21:41.976098] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.584 [2024-07-13 06:21:41.985522] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.584 [2024-07-13 06:21:41.985827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.584 [2024-07-13 06:21:41.986021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.584 [2024-07-13 06:21:41.986048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.584 [2024-07-13 06:21:41.986064] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.584 [2024-07-13 06:21:41.986271] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.584 [2024-07-13 06:21:41.986403] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.584 [2024-07-13 06:21:41.986429] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.584 [2024-07-13 06:21:41.986445] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.584 [2024-07-13 06:21:41.989069] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.584 [2024-07-13 06:21:41.992478] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:35.584 [2024-07-13 06:21:41.992605] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:35.584 [2024-07-13 06:21:41.992623] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:35.584 [2024-07-13 06:21:41.992635] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:35.584 [2024-07-13 06:21:41.992699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:35.584 [2024-07-13 06:21:41.992758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:35.584 [2024-07-13 06:21:41.992760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.584 [2024-07-13 06:21:41.997994] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.584 [2024-07-13 06:21:41.998346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.584 [2024-07-13 06:21:41.998533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.584 [2024-07-13 06:21:41.998559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.584 [2024-07-13 06:21:41.998577] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.584 [2024-07-13 06:21:41.998739] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.584 [2024-07-13 06:21:41.998952] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.584 [2024-07-13 06:21:41.998976] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.584 [2024-07-13 06:21:41.998992] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.584 [2024-07-13 06:21:42.001190] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.584 [2024-07-13 06:21:42.010449] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.584 [2024-07-13 06:21:42.010950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.584 [2024-07-13 06:21:42.011099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.584 [2024-07-13 06:21:42.011125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.584 [2024-07-13 06:21:42.011144] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.584 [2024-07-13 06:21:42.011280] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.584 [2024-07-13 06:21:42.011434] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.584 [2024-07-13 06:21:42.011457] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.584 [2024-07-13 06:21:42.011474] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.584 [2024-07-13 06:21:42.013808] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.584 [2024-07-13 06:21:42.023057] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.584 [2024-07-13 06:21:42.023510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.584 [2024-07-13 06:21:42.023682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.584 [2024-07-13 06:21:42.023709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.584 [2024-07-13 06:21:42.023728] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.584 [2024-07-13 06:21:42.023897] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.584 [2024-07-13 06:21:42.024059] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.584 [2024-07-13 06:21:42.024082] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.584 [2024-07-13 06:21:42.024099] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.584 [2024-07-13 06:21:42.026498] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.584 [2024-07-13 06:21:42.035634] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.584 [2024-07-13 06:21:42.036141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.584 [2024-07-13 06:21:42.036284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.584 [2024-07-13 06:21:42.036310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.584 [2024-07-13 06:21:42.036329] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.584 [2024-07-13 06:21:42.036493] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.584 [2024-07-13 06:21:42.036673] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.584 [2024-07-13 06:21:42.036695] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.584 [2024-07-13 06:21:42.036712] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.584 [2024-07-13 06:21:42.038933] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.584 [2024-07-13 06:21:42.048173] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.585 [2024-07-13 06:21:42.048632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.585 [2024-07-13 06:21:42.048792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.585 [2024-07-13 06:21:42.048819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.585 [2024-07-13 06:21:42.048837] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.585 [2024-07-13 06:21:42.048982] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.585 [2024-07-13 06:21:42.049090] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.585 [2024-07-13 06:21:42.049113] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.585 [2024-07-13 06:21:42.049130] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.585 [2024-07-13 06:21:42.051388] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.585 [2024-07-13 06:21:42.060653] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.585 [2024-07-13 06:21:42.061166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.585 [2024-07-13 06:21:42.061377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.585 [2024-07-13 06:21:42.061413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.585 [2024-07-13 06:21:42.061433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.585 [2024-07-13 06:21:42.061636] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.585 [2024-07-13 06:21:42.061802] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.585 [2024-07-13 06:21:42.061824] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.585 [2024-07-13 06:21:42.061841] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.585 [2024-07-13 06:21:42.064005] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.585 [2024-07-13 06:21:42.073098] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.585 [2024-07-13 06:21:42.073578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.585 [2024-07-13 06:21:42.073750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.585 [2024-07-13 06:21:42.073777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.585 [2024-07-13 06:21:42.073796] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.585 [2024-07-13 06:21:42.074007] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.585 [2024-07-13 06:21:42.074212] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.585 [2024-07-13 06:21:42.074235] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.585 [2024-07-13 06:21:42.074251] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.585 [2024-07-13 06:21:42.076515] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.585 [2024-07-13 06:21:42.086021] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.585 [2024-07-13 06:21:42.086345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.585 [2024-07-13 06:21:42.086475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.585 [2024-07-13 06:21:42.086501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.585 [2024-07-13 06:21:42.086517] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.585 [2024-07-13 06:21:42.086638] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.585 [2024-07-13 06:21:42.086825] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.585 [2024-07-13 06:21:42.086847] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.585 [2024-07-13 06:21:42.086862] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.585 [2024-07-13 06:21:42.089124] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.843 [2024-07-13 06:21:42.098671] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.844 [2024-07-13 06:21:42.098954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.844 [2024-07-13 06:21:42.099087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.844 [2024-07-13 06:21:42.099113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.844 [2024-07-13 06:21:42.099136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.844 [2024-07-13 06:21:42.099291] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.844 [2024-07-13 06:21:42.099489] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.844 [2024-07-13 06:21:42.099511] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.844 [2024-07-13 06:21:42.099525] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.844 [2024-07-13 06:21:42.101665] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.844 [2024-07-13 06:21:42.111180] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.844 [2024-07-13 06:21:42.111539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.844 [2024-07-13 06:21:42.111722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.844 [2024-07-13 06:21:42.111747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.844 [2024-07-13 06:21:42.111763] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.844 [2024-07-13 06:21:42.111910] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.844 [2024-07-13 06:21:42.112032] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.844 [2024-07-13 06:21:42.112054] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.844 [2024-07-13 06:21:42.112068] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.844 [2024-07-13 06:21:42.114312] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.844 [2024-07-13 06:21:42.123713] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.844 [2024-07-13 06:21:42.124043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.844 [2024-07-13 06:21:42.124168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.844 [2024-07-13 06:21:42.124194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.844 [2024-07-13 06:21:42.124210] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.844 [2024-07-13 06:21:42.124368] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.844 [2024-07-13 06:21:42.124542] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.844 [2024-07-13 06:21:42.124564] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.844 [2024-07-13 06:21:42.124577] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.844 [2024-07-13 06:21:42.126753] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.844 [2024-07-13 06:21:42.136109] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.844 [2024-07-13 06:21:42.136392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.844 [2024-07-13 06:21:42.136553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.844 [2024-07-13 06:21:42.136578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.844 [2024-07-13 06:21:42.136594] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.844 [2024-07-13 06:21:42.136744] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.844 [2024-07-13 06:21:42.136925] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.844 [2024-07-13 06:21:42.136948] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.844 [2024-07-13 06:21:42.136962] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.844 [2024-07-13 06:21:42.139081] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.844 [2024-07-13 06:21:42.148310] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.844 [2024-07-13 06:21:42.148642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.844 [2024-07-13 06:21:42.148827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.844 [2024-07-13 06:21:42.148853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.844 [2024-07-13 06:21:42.148875] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.844 [2024-07-13 06:21:42.149003] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.844 [2024-07-13 06:21:42.149225] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.844 [2024-07-13 06:21:42.149247] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.844 [2024-07-13 06:21:42.149261] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.844 [2024-07-13 06:21:42.151631] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.844 [2024-07-13 06:21:42.160715] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.844 [2024-07-13 06:21:42.161061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.844 [2024-07-13 06:21:42.161204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.844 [2024-07-13 06:21:42.161230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.844 [2024-07-13 06:21:42.161246] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.844 [2024-07-13 06:21:42.161424] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.844 [2024-07-13 06:21:42.161594] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.844 [2024-07-13 06:21:42.161616] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.844 [2024-07-13 06:21:42.161630] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.844 [2024-07-13 06:21:42.163930] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.844 [2024-07-13 06:21:42.173284] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.844 [2024-07-13 06:21:42.173616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.844 [2024-07-13 06:21:42.173754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.844 [2024-07-13 06:21:42.173779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.844 [2024-07-13 06:21:42.173794] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.844 [2024-07-13 06:21:42.173999] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.844 [2024-07-13 06:21:42.174238] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.844 [2024-07-13 06:21:42.174260] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.844 [2024-07-13 06:21:42.174274] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.844 [2024-07-13 06:21:42.176465] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.844 [2024-07-13 06:21:42.185574] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.844 [2024-07-13 06:21:42.185907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.844 [2024-07-13 06:21:42.186091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.844 [2024-07-13 06:21:42.186117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.844 [2024-07-13 06:21:42.186132] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.844 [2024-07-13 06:21:42.186233] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.844 [2024-07-13 06:21:42.186391] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.844 [2024-07-13 06:21:42.186412] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.844 [2024-07-13 06:21:42.186426] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.844 [2024-07-13 06:21:42.188642] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.844 [2024-07-13 06:21:42.197833] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.844 [2024-07-13 06:21:42.198177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.844 [2024-07-13 06:21:42.198323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.844 [2024-07-13 06:21:42.198349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.844 [2024-07-13 06:21:42.198365] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.844 [2024-07-13 06:21:42.198507] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.844 [2024-07-13 06:21:42.198683] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.844 [2024-07-13 06:21:42.198705] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.844 [2024-07-13 06:21:42.198718] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.844 [2024-07-13 06:21:42.200825] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.844 [2024-07-13 06:21:42.210285] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.844 [2024-07-13 06:21:42.210674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.844 [2024-07-13 06:21:42.210816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.844 [2024-07-13 06:21:42.210841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.844 [2024-07-13 06:21:42.210857] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.844 [2024-07-13 06:21:42.211057] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.844 [2024-07-13 06:21:42.211237] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.844 [2024-07-13 06:21:42.211263] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.844 [2024-07-13 06:21:42.211278] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.844 [2024-07-13 06:21:42.213385] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.845 [2024-07-13 06:21:42.222833] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.845 [2024-07-13 06:21:42.223206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.845 [2024-07-13 06:21:42.223319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.845 [2024-07-13 06:21:42.223344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.845 [2024-07-13 06:21:42.223360] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.845 [2024-07-13 06:21:42.223592] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.845 [2024-07-13 06:21:42.223745] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.845 [2024-07-13 06:21:42.223766] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.845 [2024-07-13 06:21:42.223780] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.845 [2024-07-13 06:21:42.225952] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.845 [2024-07-13 06:21:42.235263] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.845 [2024-07-13 06:21:42.235643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.845 [2024-07-13 06:21:42.235795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.845 [2024-07-13 06:21:42.235820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.845 [2024-07-13 06:21:42.235836] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.845 [2024-07-13 06:21:42.235968] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.845 [2024-07-13 06:21:42.236131] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.845 [2024-07-13 06:21:42.236153] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.845 [2024-07-13 06:21:42.236182] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.845 [2024-07-13 06:21:42.238294] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.845 [2024-07-13 06:21:42.247701] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.845 [2024-07-13 06:21:42.248028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.845 [2024-07-13 06:21:42.248177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.845 [2024-07-13 06:21:42.248202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.845 [2024-07-13 06:21:42.248217] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.845 [2024-07-13 06:21:42.248384] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.845 [2024-07-13 06:21:42.248547] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.845 [2024-07-13 06:21:42.248569] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.845 [2024-07-13 06:21:42.248587] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.845 [2024-07-13 06:21:42.250687] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.845 [2024-07-13 06:21:42.260458] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.845 [2024-07-13 06:21:42.260845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.845 [2024-07-13 06:21:42.260981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.845 [2024-07-13 06:21:42.261007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.845 [2024-07-13 06:21:42.261023] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.845 [2024-07-13 06:21:42.261165] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.845 [2024-07-13 06:21:42.261328] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.845 [2024-07-13 06:21:42.261350] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.845 [2024-07-13 06:21:42.261364] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.845 [2024-07-13 06:21:42.263526] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.845 [2024-07-13 06:21:42.272815] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.845 [2024-07-13 06:21:42.273190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.845 [2024-07-13 06:21:42.273320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.845 [2024-07-13 06:21:42.273346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.845 [2024-07-13 06:21:42.273362] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.845 [2024-07-13 06:21:42.273487] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.845 [2024-07-13 06:21:42.273727] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.845 [2024-07-13 06:21:42.273749] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.845 [2024-07-13 06:21:42.273763] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.845 [2024-07-13 06:21:42.275999] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.845 [2024-07-13 06:21:42.285247] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.845 [2024-07-13 06:21:42.285628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.845 [2024-07-13 06:21:42.285803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.845 [2024-07-13 06:21:42.285828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.845 [2024-07-13 06:21:42.285843] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.845 [2024-07-13 06:21:42.286008] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.845 [2024-07-13 06:21:42.286188] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.845 [2024-07-13 06:21:42.286210] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.845 [2024-07-13 06:21:42.286224] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.845 [2024-07-13 06:21:42.288533] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.845 [2024-07-13 06:21:42.297690] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.845 [2024-07-13 06:21:42.297969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.845 [2024-07-13 06:21:42.298155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.845 [2024-07-13 06:21:42.298180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.845 [2024-07-13 06:21:42.298196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.845 [2024-07-13 06:21:42.298370] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.845 [2024-07-13 06:21:42.298545] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.845 [2024-07-13 06:21:42.298567] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.845 [2024-07-13 06:21:42.298580] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.845 [2024-07-13 06:21:42.300544] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.845 [2024-07-13 06:21:42.310261] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.845 [2024-07-13 06:21:42.310542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.845 [2024-07-13 06:21:42.310694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.845 [2024-07-13 06:21:42.310720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.845 [2024-07-13 06:21:42.310736] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.845 [2024-07-13 06:21:42.310887] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.845 [2024-07-13 06:21:42.311067] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.845 [2024-07-13 06:21:42.311089] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.845 [2024-07-13 06:21:42.311103] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.845 [2024-07-13 06:21:42.313434] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.845 [2024-07-13 06:21:42.322715] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.845 [2024-07-13 06:21:42.323080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.845 [2024-07-13 06:21:42.323227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.845 [2024-07-13 06:21:42.323253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.845 [2024-07-13 06:21:42.323269] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.845 [2024-07-13 06:21:42.323452] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.845 [2024-07-13 06:21:42.323661] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.845 [2024-07-13 06:21:42.323683] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.845 [2024-07-13 06:21:42.323697] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.845 [2024-07-13 06:21:42.325896] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.845 [2024-07-13 06:21:42.335283] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.845 [2024-07-13 06:21:42.335603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.845 [2024-07-13 06:21:42.335754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.845 [2024-07-13 06:21:42.335779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.845 [2024-07-13 06:21:42.335795] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.845 [2024-07-13 06:21:42.335976] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.845 [2024-07-13 06:21:42.336183] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.845 [2024-07-13 06:21:42.336204] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.845 [2024-07-13 06:21:42.336218] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.845 [2024-07-13 06:21:42.338303] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:35.845 [2024-07-13 06:21:42.347809] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.846 [2024-07-13 06:21:42.348180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.846 [2024-07-13 06:21:42.348361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.846 [2024-07-13 06:21:42.348386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:35.846 [2024-07-13 06:21:42.348402] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:35.846 [2024-07-13 06:21:42.348560] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:35.846 [2024-07-13 06:21:42.348767] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.846 [2024-07-13 06:21:42.348789] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.846 [2024-07-13 06:21:42.348803] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.846 [2024-07-13 06:21:42.351114] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.105 [2024-07-13 06:21:42.360314] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.105 [2024-07-13 06:21:42.360632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.105 [2024-07-13 06:21:42.360785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.105 [2024-07-13 06:21:42.360811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.105 [2024-07-13 06:21:42.360827] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.105 [2024-07-13 06:21:42.361020] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.105 [2024-07-13 06:21:42.361157] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.105 [2024-07-13 06:21:42.361179] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.105 [2024-07-13 06:21:42.361193] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.105 [2024-07-13 06:21:42.363495] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.105 [2024-07-13 06:21:42.372740] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.105 [2024-07-13 06:21:42.373111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.105 [2024-07-13 06:21:42.373264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.105 [2024-07-13 06:21:42.373290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.105 [2024-07-13 06:21:42.373306] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.105 [2024-07-13 06:21:42.373443] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.105 [2024-07-13 06:21:42.373599] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.105 [2024-07-13 06:21:42.373621] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.105 [2024-07-13 06:21:42.373635] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.105 [2024-07-13 06:21:42.375781] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.105 [2024-07-13 06:21:42.385208] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.105 [2024-07-13 06:21:42.385508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.105 [2024-07-13 06:21:42.385684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.105 [2024-07-13 06:21:42.385709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.105 [2024-07-13 06:21:42.385725] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.105 [2024-07-13 06:21:42.385888] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.105 [2024-07-13 06:21:42.386029] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.105 [2024-07-13 06:21:42.386050] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.105 [2024-07-13 06:21:42.386064] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.105 [2024-07-13 06:21:42.388397] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.105 [2024-07-13 06:21:42.397571] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.105 [2024-07-13 06:21:42.397894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.105 [2024-07-13 06:21:42.398067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.105 [2024-07-13 06:21:42.398093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.105 [2024-07-13 06:21:42.398108] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.105 [2024-07-13 06:21:42.398262] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.105 [2024-07-13 06:21:42.398452] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.105 [2024-07-13 06:21:42.398474] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.105 [2024-07-13 06:21:42.398488] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.105 [2024-07-13 06:21:42.400717] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.105 [2024-07-13 06:21:42.410077] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.105 [2024-07-13 06:21:42.410304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.105 [2024-07-13 06:21:42.410478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.106 [2024-07-13 06:21:42.410508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.106 [2024-07-13 06:21:42.410525] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.106 [2024-07-13 06:21:42.410699] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.106 [2024-07-13 06:21:42.410853] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.106 [2024-07-13 06:21:42.410883] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.106 [2024-07-13 06:21:42.410897] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.106 [2024-07-13 06:21:42.413222] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.106 [2024-07-13 06:21:42.422307] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.106 [2024-07-13 06:21:42.422646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.106 [2024-07-13 06:21:42.422794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.106 [2024-07-13 06:21:42.422819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.106 [2024-07-13 06:21:42.422835] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.106 [2024-07-13 06:21:42.422981] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.106 [2024-07-13 06:21:42.423215] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.106 [2024-07-13 06:21:42.423237] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.106 [2024-07-13 06:21:42.423251] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.106 [2024-07-13 06:21:42.425429] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.106 [2024-07-13 06:21:42.434746] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.106 [2024-07-13 06:21:42.435104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.106 [2024-07-13 06:21:42.435254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.106 [2024-07-13 06:21:42.435280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.106 [2024-07-13 06:21:42.435295] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.106 [2024-07-13 06:21:42.435465] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.106 [2024-07-13 06:21:42.435695] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.106 [2024-07-13 06:21:42.435717] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.106 [2024-07-13 06:21:42.435731] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.106 [2024-07-13 06:21:42.437780] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.106 [2024-07-13 06:21:42.447253] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.106 [2024-07-13 06:21:42.447601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.106 [2024-07-13 06:21:42.447747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.106 [2024-07-13 06:21:42.447772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.106 [2024-07-13 06:21:42.447792] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.106 [2024-07-13 06:21:42.447940] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.106 [2024-07-13 06:21:42.448087] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.106 [2024-07-13 06:21:42.448109] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.106 [2024-07-13 06:21:42.448123] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.106 [2024-07-13 06:21:42.450274] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.106 [2024-07-13 06:21:42.459575] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.106 [2024-07-13 06:21:42.459855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.106 [2024-07-13 06:21:42.460039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.106 [2024-07-13 06:21:42.460065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.106 [2024-07-13 06:21:42.460081] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.106 [2024-07-13 06:21:42.460270] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.106 [2024-07-13 06:21:42.460393] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.106 [2024-07-13 06:21:42.460415] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.106 [2024-07-13 06:21:42.460428] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.106 [2024-07-13 06:21:42.462632] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.106 [2024-07-13 06:21:42.471919] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.106 [2024-07-13 06:21:42.472348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.106 [2024-07-13 06:21:42.472458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.106 [2024-07-13 06:21:42.472483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.106 [2024-07-13 06:21:42.472499] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.106 [2024-07-13 06:21:42.472692] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.106 [2024-07-13 06:21:42.472830] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.106 [2024-07-13 06:21:42.472851] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.106 [2024-07-13 06:21:42.472888] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.106 [2024-07-13 06:21:42.475094] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.106 [2024-07-13 06:21:42.484403] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.106 [2024-07-13 06:21:42.484698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.106 [2024-07-13 06:21:42.484881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.106 [2024-07-13 06:21:42.484908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.106 [2024-07-13 06:21:42.484924] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.106 [2024-07-13 06:21:42.485107] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.106 [2024-07-13 06:21:42.485226] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.106 [2024-07-13 06:21:42.485247] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.106 [2024-07-13 06:21:42.485261] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.106 [2024-07-13 06:21:42.487472] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.106 [2024-07-13 06:21:42.497073] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.106 [2024-07-13 06:21:42.497400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.106 [2024-07-13 06:21:42.497542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.106 [2024-07-13 06:21:42.497568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.106 [2024-07-13 06:21:42.497583] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.106 [2024-07-13 06:21:42.497774] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.106 [2024-07-13 06:21:42.497954] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.106 [2024-07-13 06:21:42.497976] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.106 [2024-07-13 06:21:42.497990] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.106 [2024-07-13 06:21:42.500038] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.106 [2024-07-13 06:21:42.509554] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.106 [2024-07-13 06:21:42.509894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.106 [2024-07-13 06:21:42.510059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.106 [2024-07-13 06:21:42.510085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.106 [2024-07-13 06:21:42.510100] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.106 [2024-07-13 06:21:42.510242] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.106 [2024-07-13 06:21:42.510385] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.106 [2024-07-13 06:21:42.510406] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.106 [2024-07-13 06:21:42.510421] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.106 [2024-07-13 06:21:42.512563] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.106 [2024-07-13 06:21:42.522113] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.106 [2024-07-13 06:21:42.522470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.106 [2024-07-13 06:21:42.522614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.106 [2024-07-13 06:21:42.522640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.106 [2024-07-13 06:21:42.522656] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.106 [2024-07-13 06:21:42.522792] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.106 [2024-07-13 06:21:42.522975] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.106 [2024-07-13 06:21:42.522997] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.106 [2024-07-13 06:21:42.523012] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.106 [2024-07-13 06:21:42.525343] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.106 [2024-07-13 06:21:42.534424] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.106 [2024-07-13 06:21:42.534751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.106 [2024-07-13 06:21:42.534914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.107 [2024-07-13 06:21:42.534941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.107 [2024-07-13 06:21:42.534957] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.107 [2024-07-13 06:21:42.535073] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.107 [2024-07-13 06:21:42.535235] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.107 [2024-07-13 06:21:42.535256] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.107 [2024-07-13 06:21:42.535270] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.107 [2024-07-13 06:21:42.537545] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.107 [2024-07-13 06:21:42.546780] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.107 [2024-07-13 06:21:42.547105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.107 [2024-07-13 06:21:42.547277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.107 [2024-07-13 06:21:42.547303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.107 [2024-07-13 06:21:42.547318] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.107 [2024-07-13 06:21:42.547460] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.107 [2024-07-13 06:21:42.547634] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.107 [2024-07-13 06:21:42.547656] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.107 [2024-07-13 06:21:42.547670] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.107 [2024-07-13 06:21:42.550034] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.107 [2024-07-13 06:21:42.559295] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.107 [2024-07-13 06:21:42.559590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.107 [2024-07-13 06:21:42.559715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.107 [2024-07-13 06:21:42.559740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.107 [2024-07-13 06:21:42.559756] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.107 [2024-07-13 06:21:42.559928] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.107 [2024-07-13 06:21:42.560134] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.107 [2024-07-13 06:21:42.560160] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.107 [2024-07-13 06:21:42.560175] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.107 [2024-07-13 06:21:42.562484] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.107 [2024-07-13 06:21:42.571564] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.107 [2024-07-13 06:21:42.571901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.107 [2024-07-13 06:21:42.572013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.107 [2024-07-13 06:21:42.572039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.107 [2024-07-13 06:21:42.572055] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.107 [2024-07-13 06:21:42.572264] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.107 [2024-07-13 06:21:42.572442] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.107 [2024-07-13 06:21:42.572463] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.107 [2024-07-13 06:21:42.572477] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.107 [2024-07-13 06:21:42.574581] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.107 [2024-07-13 06:21:42.583791] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.107 [2024-07-13 06:21:42.584124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.107 [2024-07-13 06:21:42.584251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.107 [2024-07-13 06:21:42.584276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.107 [2024-07-13 06:21:42.584292] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.107 [2024-07-13 06:21:42.584408] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.107 [2024-07-13 06:21:42.584602] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.107 [2024-07-13 06:21:42.584623] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.107 [2024-07-13 06:21:42.584637] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.107 [2024-07-13 06:21:42.586835] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.107 [2024-07-13 06:21:42.596412] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.107 [2024-07-13 06:21:42.596764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.107 [2024-07-13 06:21:42.596918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.107 [2024-07-13 06:21:42.596944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.107 [2024-07-13 06:21:42.596960] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.107 [2024-07-13 06:21:42.597134] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.107 [2024-07-13 06:21:42.597304] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.107 [2024-07-13 06:21:42.597326] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.107 [2024-07-13 06:21:42.597344] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.107 [2024-07-13 06:21:42.599342] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.107 [2024-07-13 06:21:42.608910] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.107 [2024-07-13 06:21:42.609195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.107 [2024-07-13 06:21:42.609372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.107 [2024-07-13 06:21:42.609397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.107 [2024-07-13 06:21:42.609413] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.107 [2024-07-13 06:21:42.609608] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.107 [2024-07-13 06:21:42.609742] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.107 [2024-07-13 06:21:42.609763] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.107 [2024-07-13 06:21:42.609777] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.107 [2024-07-13 06:21:42.612164] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.366 [2024-07-13 06:21:42.621739] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.366 [2024-07-13 06:21:42.622098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.366 [2024-07-13 06:21:42.622220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.366 [2024-07-13 06:21:42.622245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.366 [2024-07-13 06:21:42.622261] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.366 [2024-07-13 06:21:42.622407] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.366 [2024-07-13 06:21:42.622562] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.366 [2024-07-13 06:21:42.622584] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.366 [2024-07-13 06:21:42.622599] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.366 [2024-07-13 06:21:42.624774] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.366 [2024-07-13 06:21:42.634307] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.366 [2024-07-13 06:21:42.634625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.366 [2024-07-13 06:21:42.634771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.366 [2024-07-13 06:21:42.634797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.366 [2024-07-13 06:21:42.634812] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.366 [2024-07-13 06:21:42.634942] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.366 [2024-07-13 06:21:42.635101] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.366 [2024-07-13 06:21:42.635124] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.366 [2024-07-13 06:21:42.635138] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.367 [2024-07-13 06:21:42.637286] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.367 [2024-07-13 06:21:42.646831] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.367 [2024-07-13 06:21:42.647149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.367 [2024-07-13 06:21:42.647296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.367 [2024-07-13 06:21:42.647322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.367 [2024-07-13 06:21:42.647338] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.367 [2024-07-13 06:21:42.647471] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.367 [2024-07-13 06:21:42.647647] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.367 [2024-07-13 06:21:42.647669] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.367 [2024-07-13 06:21:42.647682] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.367 [2024-07-13 06:21:42.649826] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.367 [2024-07-13 06:21:42.659102] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.367 [2024-07-13 06:21:42.659424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.367 [2024-07-13 06:21:42.659580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.367 [2024-07-13 06:21:42.659608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.367 [2024-07-13 06:21:42.659623] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.367 [2024-07-13 06:21:42.659761] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.367 [2024-07-13 06:21:42.659972] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.367 [2024-07-13 06:21:42.659995] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.367 [2024-07-13 06:21:42.660010] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.367 [2024-07-13 06:21:42.662315] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.367 [2024-07-13 06:21:42.671398] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.367 [2024-07-13 06:21:42.671709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.367 [2024-07-13 06:21:42.671876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.367 [2024-07-13 06:21:42.671903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.367 [2024-07-13 06:21:42.671919] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.367 [2024-07-13 06:21:42.672023] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.367 [2024-07-13 06:21:42.672192] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.367 [2024-07-13 06:21:42.672214] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.367 [2024-07-13 06:21:42.672228] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.367 [2024-07-13 06:21:42.674516] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.367 [2024-07-13 06:21:42.684036] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.367 [2024-07-13 06:21:42.684325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.367 [2024-07-13 06:21:42.684458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.367 [2024-07-13 06:21:42.684484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.367 [2024-07-13 06:21:42.684500] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.367 [2024-07-13 06:21:42.684684] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.367 [2024-07-13 06:21:42.684837] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.367 [2024-07-13 06:21:42.684858] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.367 [2024-07-13 06:21:42.684883] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.367 [2024-07-13 06:21:42.687266] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.367 [2024-07-13 06:21:42.696665] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.367 [2024-07-13 06:21:42.697049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.367 [2024-07-13 06:21:42.697175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.367 [2024-07-13 06:21:42.697203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.367 [2024-07-13 06:21:42.697219] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.367 [2024-07-13 06:21:42.697398] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.367 [2024-07-13 06:21:42.697508] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.367 [2024-07-13 06:21:42.697530] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.367 [2024-07-13 06:21:42.697544] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.367 [2024-07-13 06:21:42.699746] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.367 [2024-07-13 06:21:42.709301] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.367 [2024-07-13 06:21:42.709646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.367 [2024-07-13 06:21:42.709797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.367 [2024-07-13 06:21:42.709825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.367 [2024-07-13 06:21:42.709841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.367 [2024-07-13 06:21:42.710040] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.367 [2024-07-13 06:21:42.710209] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.367 [2024-07-13 06:21:42.710231] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.367 [2024-07-13 06:21:42.710245] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.367 [2024-07-13 06:21:42.712512] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.367 [2024-07-13 06:21:42.721748] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.367 [2024-07-13 06:21:42.722065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.367 [2024-07-13 06:21:42.722195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.367 [2024-07-13 06:21:42.722221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.367 [2024-07-13 06:21:42.722237] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.367 [2024-07-13 06:21:42.722390] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.367 [2024-07-13 06:21:42.722553] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.367 [2024-07-13 06:21:42.722575] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.367 [2024-07-13 06:21:42.722589] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.367 [2024-07-13 06:21:42.724885] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.367 [2024-07-13 06:21:42.734479] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.367 [2024-07-13 06:21:42.734740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.367 [2024-07-13 06:21:42.734887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.367 [2024-07-13 06:21:42.734914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.367 [2024-07-13 06:21:42.734930] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.367 [2024-07-13 06:21:42.735154] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.367 [2024-07-13 06:21:42.735317] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.367 [2024-07-13 06:21:42.735340] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.367 [2024-07-13 06:21:42.735354] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.367 [2024-07-13 06:21:42.737725] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
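A note on the stretch above: every reconnect attempt dies in posix_sock_create() with errno = 111, which on Linux is ECONNREFUSED. The bdevperf host is already retrying nqn.2016-06.io.spdk:cnode1 while nothing is accepting connections on 10.0.0.2:4420 yet; the retries keep failing until the listener is added further down, after which the reset finally succeeds. As a minimal, hypothetical illustration (not part of the test scripts, and assuming bash's /dev/tcp redirection plus coreutils timeout are available on the host), the listener could be probed like this:

# Hypothetical probe, not taken from the SPDK test code.
# errno 111 (ECONNREFUSED) in the log means nothing is listening on 10.0.0.2:4420 yet.
if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
  echo "listener on 10.0.0.2:4420 is up, so the controller reset can succeed"
else
  echo "connection refused or timed out, matching the errno 111 failures above"
fi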
00:26:36.367 06:21:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:36.367 06:21:42 -- common/autotest_common.sh@852 -- # return 0 00:26:36.367 06:21:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:36.367 06:21:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:36.367 06:21:42 -- common/autotest_common.sh@10 -- # set +x 00:26:36.367 [2024-07-13 06:21:42.747010] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.367 [2024-07-13 06:21:42.747368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.367 [2024-07-13 06:21:42.747504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.367 [2024-07-13 06:21:42.747529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.367 [2024-07-13 06:21:42.747546] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.367 [2024-07-13 06:21:42.747689] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.367 [2024-07-13 06:21:42.747907] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.367 [2024-07-13 06:21:42.747930] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.367 [2024-07-13 06:21:42.747945] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.367 [2024-07-13 06:21:42.750229] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.367 06:21:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:36.367 [2024-07-13 06:21:42.759457] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.367 06:21:42 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:36.367 06:21:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:36.367 06:21:42 -- common/autotest_common.sh@10 -- # set +x 00:26:36.367 [2024-07-13 06:21:42.759784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.368 [2024-07-13 06:21:42.759947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.368 [2024-07-13 06:21:42.759974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.368 [2024-07-13 06:21:42.759989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.368 [2024-07-13 06:21:42.760152] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.368 [2024-07-13 06:21:42.760307] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.368 [2024-07-13 06:21:42.760329] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.368 [2024-07-13 06:21:42.760343] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:36.368 [2024-07-13 06:21:42.761685] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:36.368 [2024-07-13 06:21:42.762804] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.368 06:21:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:36.368 06:21:42 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:36.368 06:21:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:36.368 06:21:42 -- common/autotest_common.sh@10 -- # set +x 00:26:36.368 [2024-07-13 06:21:42.772188] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.368 [2024-07-13 06:21:42.772551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.368 [2024-07-13 06:21:42.772703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.368 [2024-07-13 06:21:42.772730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.368 [2024-07-13 06:21:42.772746] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.368 [2024-07-13 06:21:42.772914] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.368 [2024-07-13 06:21:42.773078] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.368 [2024-07-13 06:21:42.773100] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.368 [2024-07-13 06:21:42.773114] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.368 [2024-07-13 06:21:42.775139] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.368 [2024-07-13 06:21:42.784499] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.368 [2024-07-13 06:21:42.784791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.368 [2024-07-13 06:21:42.784957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.368 [2024-07-13 06:21:42.784986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.368 [2024-07-13 06:21:42.785002] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.368 [2024-07-13 06:21:42.785154] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.368 [2024-07-13 06:21:42.785283] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.368 [2024-07-13 06:21:42.785309] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.368 [2024-07-13 06:21:42.785324] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.368 [2024-07-13 06:21:42.787653] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:36.368 [2024-07-13 06:21:42.797058] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.368 [2024-07-13 06:21:42.797522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.368 [2024-07-13 06:21:42.797689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.368 [2024-07-13 06:21:42.797715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.368 [2024-07-13 06:21:42.797733] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.368 [2024-07-13 06:21:42.797886] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.368 [2024-07-13 06:21:42.798066] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.368 [2024-07-13 06:21:42.798089] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.368 [2024-07-13 06:21:42.798105] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.368 [2024-07-13 06:21:42.800196] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.368 Malloc0 00:26:36.368 06:21:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:36.368 06:21:42 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:36.368 06:21:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:36.368 06:21:42 -- common/autotest_common.sh@10 -- # set +x 00:26:36.368 [2024-07-13 06:21:42.809690] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.368 [2024-07-13 06:21:42.810039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.368 [2024-07-13 06:21:42.810193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.368 [2024-07-13 06:21:42.810219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.368 [2024-07-13 06:21:42.810235] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.368 [2024-07-13 06:21:42.810415] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.368 [2024-07-13 06:21:42.810622] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.368 [2024-07-13 06:21:42.810644] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.368 [2024-07-13 06:21:42.810659] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:36.368 06:21:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:36.368 06:21:42 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:36.368 06:21:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:36.368 06:21:42 -- common/autotest_common.sh@10 -- # set +x 00:26:36.368 [2024-07-13 06:21:42.812947] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.368 06:21:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:36.368 06:21:42 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:36.368 06:21:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:36.368 06:21:42 -- common/autotest_common.sh@10 -- # set +x 00:26:36.368 [2024-07-13 06:21:42.822398] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.368 [2024-07-13 06:21:42.822695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.368 [2024-07-13 06:21:42.822905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.368 [2024-07-13 06:21:42.822932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2400 with addr=10.0.0.2, port=4420 00:26:36.368 [2024-07-13 06:21:42.822949] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2400 is same with the state(5) to be set 00:26:36.368 [2024-07-13 06:21:42.823086] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2400 (9): Bad file descriptor 00:26:36.368 [2024-07-13 06:21:42.823089] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:36.368 [2024-07-13 06:21:42.823314] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:36.368 [2024-07-13 06:21:42.823335] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:36.368 [2024-07-13 06:21:42.823349] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:36.368 [2024-07-13 06:21:42.825483] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:36.368 06:21:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:36.368 06:21:42 -- host/bdevperf.sh@38 -- # wait 1232101 00:26:36.368 [2024-07-13 06:21:42.835283] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:36.625 [2024-07-13 06:21:42.909298] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
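For readers skimming the interleaved trace above: the rpc_cmd calls issued by host/bdevperf.sh (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) are the entire target-side setup, and the moment the listener on 10.0.0.2 port 4420 comes up the controller reset reports success. A rough standalone sketch of the same sequence against an already running nvmf_tgt, using SPDK's rpc.py directly rather than the harness's rpc_cmd wrapper, is shown below; the script path and default RPC socket are assumptions, while the arguments are copied from the trace:

# Sketch only; assumes a running nvmf_tgt and SPDK's scripts/rpc.py talking to
# the default /var/tmp/spdk.sock socket. Arguments mirror the rpc_cmd calls above.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420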
00:26:44.722
00:26:44.722 Latency(us)
00:26:44.722 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:44.722 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:44.722 Verification LBA range: start 0x0 length 0x4000
00:26:44.722 Nvme1n1 : 15.01 9331.40 36.45 14702.12 0.00 5310.06 682.67 20291.89
00:26:44.722 ===================================================================================================================
00:26:44.722 Total : 9331.40 36.45 14702.12 0.00 5310.06 682.67 20291.89
00:26:45.287 06:21:51 -- host/bdevperf.sh@39 -- # sync
00:26:45.287 06:21:51 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:45.287 06:21:51 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:45.287 06:21:51 -- common/autotest_common.sh@10 -- # set +x
00:26:45.288 06:21:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:26:45.288 06:21:51 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:26:45.288 06:21:51 -- host/bdevperf.sh@44 -- # nvmftestfini
00:26:45.288 06:21:51 -- nvmf/common.sh@476 -- # nvmfcleanup
00:26:45.288 06:21:51 -- nvmf/common.sh@116 -- # sync
00:26:45.288 06:21:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:26:45.288 06:21:51 -- nvmf/common.sh@119 -- # set +e
00:26:45.288 06:21:51 -- nvmf/common.sh@120 -- # for i in {1..20}
00:26:45.288 06:21:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:26:45.288 rmmod nvme_tcp
00:26:45.288 rmmod nvme_fabrics
00:26:45.288 rmmod nvme_keyring
00:26:45.288 06:21:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:26:45.288 06:21:51 -- nvmf/common.sh@123 -- # set -e
00:26:45.288 06:21:51 -- nvmf/common.sh@124 -- # return 0
00:26:45.288 06:21:51 -- nvmf/common.sh@477 -- # '[' -n 1232795 ']'
00:26:45.288 06:21:51 -- nvmf/common.sh@478 -- # killprocess 1232795
00:26:45.288 06:21:51 -- common/autotest_common.sh@926 -- # '[' -z 1232795 ']'
00:26:45.288 06:21:51 -- common/autotest_common.sh@930 -- # kill -0 1232795
00:26:45.288 06:21:51 -- common/autotest_common.sh@931 -- # uname
00:26:45.288 06:21:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:26:45.288 06:21:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1232795
00:26:45.288 06:21:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:26:45.288 06:21:51 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:26:45.288 06:21:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1232795'
00:26:45.288 killing process with pid 1232795
00:26:45.288 06:21:51 -- common/autotest_common.sh@945 -- # kill 1232795
00:26:45.288 06:21:51 -- common/autotest_common.sh@950 -- # wait 1232795
00:26:45.546 06:21:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:26:45.546 06:21:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:26:45.546 06:21:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:26:45.546 06:21:51 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:45.546 06:21:51 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:26:45.546 06:21:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:45.546 06:21:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:26:45.546 06:21:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:47.453 06:21:53 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:26:47.453
00:26:47.453 real 0m22.988s
00:26:47.453 user 0m58.828s
00:26:47.453 sys 0m5.497s
00:26:47.453 06:21:53 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:26:47.453 06:21:53 -- common/autotest_common.sh@10 -- # set +x 00:26:47.453 ************************************ 00:26:47.453 END TEST nvmf_bdevperf 00:26:47.453 ************************************ 00:26:47.453 06:21:53 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:47.453 06:21:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:47.453 06:21:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:47.453 06:21:53 -- common/autotest_common.sh@10 -- # set +x 00:26:47.453 ************************************ 00:26:47.453 START TEST nvmf_target_disconnect 00:26:47.453 ************************************ 00:26:47.453 06:21:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:47.711 * Looking for test storage... 00:26:47.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:47.711 06:21:53 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:47.711 06:21:53 -- nvmf/common.sh@7 -- # uname -s 00:26:47.711 06:21:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:47.711 06:21:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:47.711 06:21:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:47.711 06:21:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:47.711 06:21:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:47.711 06:21:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:47.711 06:21:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:47.711 06:21:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:47.711 06:21:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:47.711 06:21:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:47.711 06:21:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:47.711 06:21:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:47.711 06:21:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:47.711 06:21:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:47.711 06:21:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:47.711 06:21:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:47.711 06:21:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:47.711 06:21:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:47.711 06:21:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:47.711 06:21:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.711 06:21:53 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.711 06:21:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.711 06:21:53 -- paths/export.sh@5 -- # export PATH 00:26:47.711 06:21:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.711 06:21:53 -- nvmf/common.sh@46 -- # : 0 00:26:47.711 06:21:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:47.711 06:21:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:47.711 06:21:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:47.711 06:21:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:47.711 06:21:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:47.711 06:21:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:47.711 06:21:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:47.711 06:21:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:47.711 06:21:53 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:47.711 06:21:53 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:47.711 06:21:53 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:47.711 06:21:53 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:26:47.711 06:21:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:47.711 06:21:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:47.711 06:21:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:47.711 06:21:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:47.711 06:21:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:47.711 06:21:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.712 06:21:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:47.712 06:21:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.712 06:21:53 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:47.712 06:21:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:47.712 06:21:53 -- nvmf/common.sh@284 -- # 
xtrace_disable 00:26:47.712 06:21:53 -- common/autotest_common.sh@10 -- # set +x 00:26:49.615 06:21:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:49.615 06:21:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:49.615 06:21:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:49.615 06:21:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:49.615 06:21:55 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:49.615 06:21:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:49.615 06:21:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:49.615 06:21:55 -- nvmf/common.sh@294 -- # net_devs=() 00:26:49.615 06:21:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:49.615 06:21:55 -- nvmf/common.sh@295 -- # e810=() 00:26:49.616 06:21:55 -- nvmf/common.sh@295 -- # local -ga e810 00:26:49.616 06:21:55 -- nvmf/common.sh@296 -- # x722=() 00:26:49.616 06:21:55 -- nvmf/common.sh@296 -- # local -ga x722 00:26:49.616 06:21:55 -- nvmf/common.sh@297 -- # mlx=() 00:26:49.616 06:21:55 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:49.616 06:21:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:49.616 06:21:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:49.616 06:21:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:49.616 06:21:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:49.616 06:21:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:49.616 06:21:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:49.616 06:21:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:49.616 06:21:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:49.616 06:21:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:49.616 06:21:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:49.616 06:21:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:49.616 06:21:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:49.616 06:21:55 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:49.616 06:21:55 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:49.616 06:21:55 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:49.616 06:21:55 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:49.616 06:21:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:49.616 06:21:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:49.616 06:21:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:49.616 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:49.616 06:21:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:49.616 06:21:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:49.616 06:21:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.616 06:21:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.616 06:21:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:49.616 06:21:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:49.616 06:21:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:49.616 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:49.616 06:21:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:49.616 06:21:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:49.616 06:21:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.616 06:21:55 -- nvmf/common.sh@350 -- 
# [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.616 06:21:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:49.616 06:21:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:49.616 06:21:55 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:49.616 06:21:55 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:49.616 06:21:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:49.616 06:21:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.616 06:21:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:49.616 06:21:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.616 06:21:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:49.616 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:49.616 06:21:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.616 06:21:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:49.616 06:21:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.616 06:21:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:49.616 06:21:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.616 06:21:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:49.616 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:49.616 06:21:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.616 06:21:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:49.616 06:21:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:49.616 06:21:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:49.616 06:21:55 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:49.616 06:21:55 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:49.616 06:21:55 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:49.616 06:21:55 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:49.616 06:21:55 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:49.616 06:21:55 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:49.616 06:21:55 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:49.616 06:21:55 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:49.616 06:21:55 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:49.616 06:21:55 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:49.616 06:21:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:49.616 06:21:55 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:49.616 06:21:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:49.616 06:21:55 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:49.616 06:21:55 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:49.616 06:21:55 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:49.616 06:21:55 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:49.616 06:21:55 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:49.616 06:21:55 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:49.616 06:21:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:49.616 06:21:55 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:49.616 06:21:55 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:49.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:49.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:26:49.616 00:26:49.616 --- 10.0.0.2 ping statistics --- 00:26:49.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.616 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:26:49.616 06:21:55 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:49.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:49.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:26:49.616 00:26:49.616 --- 10.0.0.1 ping statistics --- 00:26:49.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.616 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:26:49.616 06:21:55 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:49.616 06:21:55 -- nvmf/common.sh@410 -- # return 0 00:26:49.616 06:21:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:49.616 06:21:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:49.616 06:21:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:49.616 06:21:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:49.616 06:21:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:49.616 06:21:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:49.616 06:21:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:49.616 06:21:55 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:49.616 06:21:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:49.616 06:21:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:49.616 06:21:55 -- common/autotest_common.sh@10 -- # set +x 00:26:49.616 ************************************ 00:26:49.616 START TEST nvmf_target_disconnect_tc1 00:26:49.616 ************************************ 00:26:49.616 06:21:55 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:26:49.616 06:21:55 -- host/target_disconnect.sh@32 -- # set +e 00:26:49.616 06:21:55 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:49.616 EAL: No free 2048 kB hugepages reported on node 1 00:26:49.616 [2024-07-13 06:21:56.033735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.616 [2024-07-13 06:21:56.033946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.616 [2024-07-13 06:21:56.033974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a2920 with addr=10.0.0.2, port=4420 00:26:49.616 [2024-07-13 06:21:56.034023] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:49.616 [2024-07-13 06:21:56.034042] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:49.616 [2024-07-13 06:21:56.034055] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:26:49.616 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:49.616 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:49.616 Initializing NVMe Controllers 00:26:49.616 06:21:56 -- host/target_disconnect.sh@33 -- # trap - ERR 00:26:49.617 06:21:56 -- host/target_disconnect.sh@33 -- # print_backtrace 00:26:49.617 06:21:56 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:26:49.617 06:21:56 -- common/autotest_common.sh@1132 -- # return 0 00:26:49.617 
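The namespace plumbing that nvmf_tcp_init performed a few entries above (the ip netns / ip addr / iptables calls) reduces to the short sequence below. This is a condensed restatement of the trace, not the script's literal text; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are simply what this run detected on the two ports.

    # target port moves into its own network namespace, initiator port stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                   # initiator -> target check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator check

Nothing is listening on 10.0.0.2:4420 at this stage, hence the errno 111 (connection refused) that the tc1 probe reports above.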
06:21:56 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:26:49.617 06:21:56 -- host/target_disconnect.sh@41 -- # set -e 00:26:49.617 00:26:49.617 real 0m0.096s 00:26:49.617 user 0m0.037s 00:26:49.617 sys 0m0.059s 00:26:49.617 06:21:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:49.617 06:21:56 -- common/autotest_common.sh@10 -- # set +x 00:26:49.617 ************************************ 00:26:49.617 END TEST nvmf_target_disconnect_tc1 00:26:49.617 ************************************ 00:26:49.617 06:21:56 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:49.617 06:21:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:49.617 06:21:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:49.617 06:21:56 -- common/autotest_common.sh@10 -- # set +x 00:26:49.617 ************************************ 00:26:49.617 START TEST nvmf_target_disconnect_tc2 00:26:49.617 ************************************ 00:26:49.617 06:21:56 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:26:49.617 06:21:56 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:26:49.617 06:21:56 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:49.617 06:21:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:49.617 06:21:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:49.617 06:21:56 -- common/autotest_common.sh@10 -- # set +x 00:26:49.617 06:21:56 -- nvmf/common.sh@469 -- # nvmfpid=1235984 00:26:49.617 06:21:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:49.617 06:21:56 -- nvmf/common.sh@470 -- # waitforlisten 1235984 00:26:49.617 06:21:56 -- common/autotest_common.sh@819 -- # '[' -z 1235984 ']' 00:26:49.617 06:21:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:49.617 06:21:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:49.617 06:21:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:49.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:49.617 06:21:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:49.617 06:21:56 -- common/autotest_common.sh@10 -- # set +x 00:26:49.617 [2024-07-13 06:21:56.120501] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:49.617 [2024-07-13 06:21:56.120579] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:49.875 EAL: No free 2048 kB hugepages reported on node 1 00:26:49.875 [2024-07-13 06:21:56.183886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:49.875 [2024-07-13 06:21:56.288371] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:49.875 [2024-07-13 06:21:56.288526] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:49.875 [2024-07-13 06:21:56.288543] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:49.875 [2024-07-13 06:21:56.288555] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
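The tc2 target whose startup banner appears above is launched inside that same namespace. Stripped of the harness wrappers, the launch recorded in the trace amounts to roughly:

    # nvmf_tgt runs in the target namespace; core mask, flags and pid are the values from this run
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!                    # 1235984 in this run
    waitforlisten "$nvmfpid"      # harness helper: block until the app answers on /var/tmp/spdk.sock

waitforlisten here is the autotest helper visible in the trace, not a standalone tool.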
00:26:49.875 [2024-07-13 06:21:56.288647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:26:49.875 [2024-07-13 06:21:56.288710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:26:49.875 [2024-07-13 06:21:56.288775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:26:49.875 [2024-07-13 06:21:56.288778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:50.808 06:21:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:50.808 06:21:57 -- common/autotest_common.sh@852 -- # return 0 00:26:50.808 06:21:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:50.808 06:21:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:50.808 06:21:57 -- common/autotest_common.sh@10 -- # set +x 00:26:50.808 06:21:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:50.808 06:21:57 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:50.808 06:21:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.808 06:21:57 -- common/autotest_common.sh@10 -- # set +x 00:26:50.808 Malloc0 00:26:50.808 06:21:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.808 06:21:57 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:50.808 06:21:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.808 06:21:57 -- common/autotest_common.sh@10 -- # set +x 00:26:50.808 [2024-07-13 06:21:57.105501] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:50.808 06:21:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.808 06:21:57 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:50.808 06:21:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.808 06:21:57 -- common/autotest_common.sh@10 -- # set +x 00:26:50.808 06:21:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.808 06:21:57 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:50.808 06:21:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.808 06:21:57 -- common/autotest_common.sh@10 -- # set +x 00:26:50.808 06:21:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.808 06:21:57 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:50.808 06:21:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.808 06:21:57 -- common/autotest_common.sh@10 -- # set +x 00:26:50.808 [2024-07-13 06:21:57.133752] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:50.808 06:21:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.808 06:21:57 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:50.808 06:21:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:50.808 06:21:57 -- common/autotest_common.sh@10 -- # set +x 00:26:50.808 06:21:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:50.808 06:21:57 -- host/target_disconnect.sh@50 -- # reconnectpid=1236141 00:26:50.808 06:21:57 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:50.808 06:21:57 -- 
host/target_disconnect.sh@52 -- # sleep 2 00:26:50.808 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.715 06:21:59 -- host/target_disconnect.sh@53 -- # kill -9 1235984 00:26:52.715 06:21:59 -- host/target_disconnect.sh@55 -- # sleep 2 00:26:52.715 Read completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Read completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Read completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Read completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Read completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Read completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Read completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Read completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Read completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Write completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Write completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Read completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Write completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Read completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Write completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Write completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Read completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Read completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Read completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Write completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Read completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Write completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Read completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Write completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Write completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Write completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Read completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Read completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Write completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Read completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Write completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Write completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Read completed with error (sct=0, sc=8) 00:26:52.715 [2024-07-13 06:21:59.157517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:52.715 starting I/O failed 00:26:52.715 Read completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Read completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Read completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Read completed 
with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Read completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Read completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Read completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Read completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Read completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Read completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Read completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Write completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Read completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Write completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Read completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Write completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Write completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Write completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Read completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Write completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.715 Write completed with error (sct=0, sc=8) 00:26:52.715 starting I/O failed 00:26:52.716 Read completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Read completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Read completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 [2024-07-13 06:21:59.157887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:52.716 Read completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Read completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Read completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Read completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Read completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Read completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Read completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Read completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Read completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Read completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Read completed with error 
(sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Read completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Read completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Read completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Read completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Read completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Read completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 [2024-07-13 06:21:59.158234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:52.716 Read completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Read completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Read completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Read completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Read completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Read completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Read completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Read completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Read completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Read completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Read completed with error (sct=0, 
sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Read completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Read completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Read completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Read completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Write completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 Read completed with error (sct=0, sc=8) 00:26:52.716 starting I/O failed 00:26:52.716 [2024-07-13 06:21:59.158524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:52.716 [2024-07-13 06:21:59.158802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.716 [2024-07-13 06:21:59.158966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.716 [2024-07-13 06:21:59.159005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.716 qpair failed and we were unable to recover it. 00:26:52.716 [2024-07-13 06:21:59.159137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.716 [2024-07-13 06:21:59.159260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.716 [2024-07-13 06:21:59.159284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.716 qpair failed and we were unable to recover it. 00:26:52.716 [2024-07-13 06:21:59.159425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.716 [2024-07-13 06:21:59.159567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.716 [2024-07-13 06:21:59.159591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.716 qpair failed and we were unable to recover it. 00:26:52.716 [2024-07-13 06:21:59.159796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.716 [2024-07-13 06:21:59.160009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.716 [2024-07-13 06:21:59.160035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.716 qpair failed and we were unable to recover it. 
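For reference against the error burst, the subsystem that the reconnect workload is driving was configured a couple of seconds earlier (the rpc_cmd calls logged around 06:21:57). rpc_cmd in the trace is the harness wrapper around scripts/rpc.py, so the same configuration could be expressed roughly as:

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

after which the reconnect example was started against that listener (reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420', pid 1236141 in this run).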
00:26:52.716 [2024-07-13 06:21:59.160207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.716 [2024-07-13 06:21:59.160472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.716 [2024-07-13 06:21:59.160523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.716 qpair failed and we were unable to recover it. 00:26:52.716 [2024-07-13 06:21:59.160766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.716 [2024-07-13 06:21:59.160927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.716 [2024-07-13 06:21:59.160958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.716 qpair failed and we were unable to recover it. 00:26:52.716 [2024-07-13 06:21:59.161086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.716 [2024-07-13 06:21:59.161230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.716 [2024-07-13 06:21:59.161255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.716 qpair failed and we were unable to recover it. 00:26:52.716 [2024-07-13 06:21:59.161419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.716 [2024-07-13 06:21:59.161638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.716 [2024-07-13 06:21:59.161662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.716 qpair failed and we were unable to recover it. 00:26:52.716 [2024-07-13 06:21:59.161879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.716 [2024-07-13 06:21:59.162021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.716 [2024-07-13 06:21:59.162046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.716 qpair failed and we were unable to recover it. 00:26:52.716 [2024-07-13 06:21:59.162179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.716 [2024-07-13 06:21:59.162322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.716 [2024-07-13 06:21:59.162346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.716 qpair failed and we were unable to recover it. 00:26:52.716 [2024-07-13 06:21:59.162485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.716 [2024-07-13 06:21:59.162597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.162621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.717 qpair failed and we were unable to recover it. 
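The wall of failed completions and refused connects that follows is the fault this test injects deliberately: with the workload running, the harness hard-kills the first target, so every command still queued on the reported qpairs completes with an error and each subsequent connect() to 10.0.0.2:4420 returns errno 111 for as long as nothing is listening there. The injection itself, as logged at host/target_disconnect.sh@52-55 above, is just:

    sleep 2            # let the reconnect workload get I/O in flight
    kill -9 1235984    # hard-kill the first nvmf_tgt ($nvmfpid from this run)
    sleep 2            # the initiator now sees aborted completions and connection-refused retries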
00:26:52.717 [2024-07-13 06:21:59.162766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.162892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.162917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.717 qpair failed and we were unable to recover it. 00:26:52.717 [2024-07-13 06:21:59.163036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.163156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.163179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.717 qpair failed and we were unable to recover it. 00:26:52.717 [2024-07-13 06:21:59.163340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.163486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.163510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.717 qpair failed and we were unable to recover it. 00:26:52.717 [2024-07-13 06:21:59.163630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.163775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.163800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.717 qpair failed and we were unable to recover it. 00:26:52.717 [2024-07-13 06:21:59.163943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.164066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.164090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.717 qpair failed and we were unable to recover it. 00:26:52.717 [2024-07-13 06:21:59.164282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.164593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.164618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.717 qpair failed and we were unable to recover it. 00:26:52.717 [2024-07-13 06:21:59.164780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.164965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.164991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.717 qpair failed and we were unable to recover it. 
00:26:52.717 [2024-07-13 06:21:59.165137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.165292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.165317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.717 qpair failed and we were unable to recover it. 00:26:52.717 [2024-07-13 06:21:59.165460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.165604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.165629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.717 qpair failed and we were unable to recover it. 00:26:52.717 [2024-07-13 06:21:59.165780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.165892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.165917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.717 qpair failed and we were unable to recover it. 00:26:52.717 [2024-07-13 06:21:59.166034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.166158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.166182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.717 qpair failed and we were unable to recover it. 00:26:52.717 [2024-07-13 06:21:59.166357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.166533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.166558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.717 qpair failed and we were unable to recover it. 00:26:52.717 [2024-07-13 06:21:59.166703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.166885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.166910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.717 qpair failed and we were unable to recover it. 00:26:52.717 [2024-07-13 06:21:59.167059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.167186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.167211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.717 qpair failed and we were unable to recover it. 
00:26:52.717 [2024-07-13 06:21:59.167353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.167521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.167546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.717 qpair failed and we were unable to recover it. 00:26:52.717 [2024-07-13 06:21:59.167671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.167790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.167813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.717 qpair failed and we were unable to recover it. 00:26:52.717 [2024-07-13 06:21:59.167954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.168098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.168122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.717 qpair failed and we were unable to recover it. 00:26:52.717 [2024-07-13 06:21:59.168252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.168456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.168481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.717 qpair failed and we were unable to recover it. 00:26:52.717 [2024-07-13 06:21:59.168602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.168746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.168770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.717 qpair failed and we were unable to recover it. 00:26:52.717 [2024-07-13 06:21:59.168891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.169017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.169042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.717 qpair failed and we were unable to recover it. 00:26:52.717 [2024-07-13 06:21:59.169157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.169275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.169299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.717 qpair failed and we were unable to recover it. 
00:26:52.717 [2024-07-13 06:21:59.169473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.169593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.169616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.717 qpair failed and we were unable to recover it. 00:26:52.717 [2024-07-13 06:21:59.169737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.169887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.169911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.717 qpair failed and we were unable to recover it. 00:26:52.717 [2024-07-13 06:21:59.170024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.170736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.170761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.717 qpair failed and we were unable to recover it. 00:26:52.717 [2024-07-13 06:21:59.170937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.171050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.171074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.717 qpair failed and we were unable to recover it. 00:26:52.717 [2024-07-13 06:21:59.171220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.171340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.171365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.717 qpair failed and we were unable to recover it. 00:26:52.717 [2024-07-13 06:21:59.171510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.171658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.171683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.717 qpair failed and we were unable to recover it. 00:26:52.717 [2024-07-13 06:21:59.171827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.171987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.717 [2024-07-13 06:21:59.172012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.717 qpair failed and we were unable to recover it. 
00:26:52.717 [2024-07-13 06:21:59.172160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.717 [2024-07-13 06:21:59.172283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.717 [2024-07-13 06:21:59.172308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420
00:26:52.717 qpair failed and we were unable to recover it.
[... the same four-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 06:21:59.172 to 06:21:59.222 ...]
00:26:52.996 [2024-07-13 06:21:59.222413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.996 [2024-07-13 06:21:59.222539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:52.996 [2024-07-13 06:21:59.222573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420
00:26:52.996 qpair failed and we were unable to recover it.
00:26:52.996 [2024-07-13 06:21:59.222762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.222930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.222959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.996 qpair failed and we were unable to recover it. 00:26:52.996 [2024-07-13 06:21:59.223130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.223263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.223289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.996 qpair failed and we were unable to recover it. 00:26:52.996 [2024-07-13 06:21:59.223502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.223627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.223651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.996 qpair failed and we were unable to recover it. 00:26:52.996 [2024-07-13 06:21:59.223834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.223986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.224011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.996 qpair failed and we were unable to recover it. 00:26:52.996 [2024-07-13 06:21:59.224138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.224255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.224280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.996 qpair failed and we were unable to recover it. 00:26:52.996 [2024-07-13 06:21:59.224451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.224578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.224602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.996 qpair failed and we were unable to recover it. 00:26:52.996 [2024-07-13 06:21:59.224747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.224895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.224920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.996 qpair failed and we were unable to recover it. 
00:26:52.996 [2024-07-13 06:21:59.225038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.225181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.225205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.996 qpair failed and we were unable to recover it. 00:26:52.996 [2024-07-13 06:21:59.225379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.225521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.225545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.996 qpair failed and we were unable to recover it. 00:26:52.996 [2024-07-13 06:21:59.225667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.225784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.225809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.996 qpair failed and we were unable to recover it. 00:26:52.996 [2024-07-13 06:21:59.225984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.226159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.226184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.996 qpair failed and we were unable to recover it. 00:26:52.996 [2024-07-13 06:21:59.226306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.226493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.226518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.996 qpair failed and we were unable to recover it. 00:26:52.996 [2024-07-13 06:21:59.226644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.226786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.226810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.996 qpair failed and we were unable to recover it. 00:26:52.996 [2024-07-13 06:21:59.226956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.227078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.227102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.996 qpair failed and we were unable to recover it. 
00:26:52.996 [2024-07-13 06:21:59.227221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.227358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.227383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.996 qpair failed and we were unable to recover it. 00:26:52.996 [2024-07-13 06:21:59.227534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.227656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.227681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.996 qpair failed and we were unable to recover it. 00:26:52.996 [2024-07-13 06:21:59.227799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.227948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.227974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.996 qpair failed and we were unable to recover it. 00:26:52.996 [2024-07-13 06:21:59.228153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.228294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.228317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.996 qpair failed and we were unable to recover it. 00:26:52.996 [2024-07-13 06:21:59.228495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.228634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.996 [2024-07-13 06:21:59.228661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.996 qpair failed and we were unable to recover it. 00:26:52.996 [2024-07-13 06:21:59.228831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.228984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.229009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.997 qpair failed and we were unable to recover it. 00:26:52.997 [2024-07-13 06:21:59.229185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.229331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.229356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.997 qpair failed and we were unable to recover it. 
00:26:52.997 [2024-07-13 06:21:59.229559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.229715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.229742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.997 qpair failed and we were unable to recover it. 00:26:52.997 [2024-07-13 06:21:59.229897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.230040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.230081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.997 qpair failed and we were unable to recover it. 00:26:52.997 [2024-07-13 06:21:59.230230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.230387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.230411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.997 qpair failed and we were unable to recover it. 00:26:52.997 [2024-07-13 06:21:59.230556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.230726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.230753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.997 qpair failed and we were unable to recover it. 00:26:52.997 [2024-07-13 06:21:59.230895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.231037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.231062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.997 qpair failed and we were unable to recover it. 00:26:52.997 [2024-07-13 06:21:59.231179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.231323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.231352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.997 qpair failed and we were unable to recover it. 00:26:52.997 [2024-07-13 06:21:59.231479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.231687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.231711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.997 qpair failed and we were unable to recover it. 
00:26:52.997 [2024-07-13 06:21:59.231859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.232026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.232054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.997 qpair failed and we were unable to recover it. 00:26:52.997 [2024-07-13 06:21:59.232220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.232365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.232390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.997 qpair failed and we were unable to recover it. 00:26:52.997 [2024-07-13 06:21:59.232565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.232681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.232705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.997 qpair failed and we were unable to recover it. 00:26:52.997 [2024-07-13 06:21:59.232898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.233059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.233085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.997 qpair failed and we were unable to recover it. 00:26:52.997 [2024-07-13 06:21:59.233199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.233347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.233371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.997 qpair failed and we were unable to recover it. 00:26:52.997 [2024-07-13 06:21:59.233563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.233754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.233777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.997 qpair failed and we were unable to recover it. 00:26:52.997 [2024-07-13 06:21:59.233950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.234096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.234121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.997 qpair failed and we were unable to recover it. 
00:26:52.997 [2024-07-13 06:21:59.234267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.234388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.234413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.997 qpair failed and we were unable to recover it. 00:26:52.997 [2024-07-13 06:21:59.234560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.234716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.234745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.997 qpair failed and we were unable to recover it. 00:26:52.997 [2024-07-13 06:21:59.234922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.235030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.235054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.997 qpair failed and we were unable to recover it. 00:26:52.997 [2024-07-13 06:21:59.235237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.235403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.235439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.997 qpair failed and we were unable to recover it. 00:26:52.997 [2024-07-13 06:21:59.235588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.235737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.235762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.997 qpair failed and we were unable to recover it. 00:26:52.997 [2024-07-13 06:21:59.235891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.236015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.236042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.997 qpair failed and we were unable to recover it. 00:26:52.997 [2024-07-13 06:21:59.236165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.236333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.236361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.997 qpair failed and we were unable to recover it. 
00:26:52.997 [2024-07-13 06:21:59.236539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.236682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.236706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.997 qpair failed and we were unable to recover it. 00:26:52.997 [2024-07-13 06:21:59.236849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.236984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.237010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.997 qpair failed and we were unable to recover it. 00:26:52.997 [2024-07-13 06:21:59.237160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.237356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.237380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.997 qpair failed and we were unable to recover it. 00:26:52.997 [2024-07-13 06:21:59.237557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.237740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.237764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.997 qpair failed and we were unable to recover it. 00:26:52.997 [2024-07-13 06:21:59.237922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.238073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.238115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.997 qpair failed and we were unable to recover it. 00:26:52.997 [2024-07-13 06:21:59.238285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.238454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.238481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.997 qpair failed and we were unable to recover it. 00:26:52.997 [2024-07-13 06:21:59.238642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.238837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.238883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.997 qpair failed and we were unable to recover it. 
00:26:52.997 [2024-07-13 06:21:59.239366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.239538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.997 [2024-07-13 06:21:59.239564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.998 qpair failed and we were unable to recover it. 00:26:52.998 [2024-07-13 06:21:59.239714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.239882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.239910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.998 qpair failed and we were unable to recover it. 00:26:52.998 [2024-07-13 06:21:59.240065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.240255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.240280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.998 qpair failed and we were unable to recover it. 00:26:52.998 [2024-07-13 06:21:59.240451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.240629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.240670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.998 qpair failed and we were unable to recover it. 00:26:52.998 [2024-07-13 06:21:59.240806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.240972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.240996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.998 qpair failed and we were unable to recover it. 00:26:52.998 [2024-07-13 06:21:59.241139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.241306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.241331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.998 qpair failed and we were unable to recover it. 00:26:52.998 [2024-07-13 06:21:59.241491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.241630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.241654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.998 qpair failed and we were unable to recover it. 
00:26:52.998 [2024-07-13 06:21:59.241797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.241959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.241987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.998 qpair failed and we were unable to recover it. 00:26:52.998 [2024-07-13 06:21:59.242181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.242344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.242369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.998 qpair failed and we were unable to recover it. 00:26:52.998 [2024-07-13 06:21:59.242513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.242628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.242652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.998 qpair failed and we were unable to recover it. 00:26:52.998 [2024-07-13 06:21:59.242798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.242968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.242995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.998 qpair failed and we were unable to recover it. 00:26:52.998 [2024-07-13 06:21:59.243166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.243338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.243363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.998 qpair failed and we were unable to recover it. 00:26:52.998 [2024-07-13 06:21:59.243485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.243633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.243657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.998 qpair failed and we were unable to recover it. 00:26:52.998 [2024-07-13 06:21:59.243828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.243997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.244025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.998 qpair failed and we were unable to recover it. 
00:26:52.998 [2024-07-13 06:21:59.244200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.244322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.244345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.998 qpair failed and we were unable to recover it. 00:26:52.998 [2024-07-13 06:21:59.244492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.244662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.244686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.998 qpair failed and we were unable to recover it. 00:26:52.998 [2024-07-13 06:21:59.244835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.244996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.245021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.998 qpair failed and we were unable to recover it. 00:26:52.998 [2024-07-13 06:21:59.245152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.245317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.245344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.998 qpair failed and we were unable to recover it. 00:26:52.998 [2024-07-13 06:21:59.245514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.245633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.245656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.998 qpair failed and we were unable to recover it. 00:26:52.998 [2024-07-13 06:21:59.245863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.246004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.246032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.998 qpair failed and we were unable to recover it. 00:26:52.998 [2024-07-13 06:21:59.246156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.246348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.246375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.998 qpair failed and we were unable to recover it. 
00:26:52.998 [2024-07-13 06:21:59.246552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.246690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.246731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.998 qpair failed and we were unable to recover it. 00:26:52.998 [2024-07-13 06:21:59.246888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.247015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.247042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.998 qpair failed and we were unable to recover it. 00:26:52.998 [2024-07-13 06:21:59.247234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.247400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.247424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.998 qpair failed and we were unable to recover it. 00:26:52.998 [2024-07-13 06:21:59.247568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.247737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.247760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.998 qpair failed and we were unable to recover it. 00:26:52.998 [2024-07-13 06:21:59.247935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.248099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.248127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.998 qpair failed and we were unable to recover it. 00:26:52.998 [2024-07-13 06:21:59.248297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.248483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.248509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.998 qpair failed and we were unable to recover it. 00:26:52.998 [2024-07-13 06:21:59.248655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.248815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.248842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.998 qpair failed and we were unable to recover it. 
00:26:52.998 [2024-07-13 06:21:59.249010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.249152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.249181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.998 qpair failed and we were unable to recover it. 00:26:52.998 [2024-07-13 06:21:59.249389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.249514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.249541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.998 qpair failed and we were unable to recover it. 00:26:52.998 [2024-07-13 06:21:59.249681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.249849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.249900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.998 qpair failed and we were unable to recover it. 00:26:52.998 [2024-07-13 06:21:59.250034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.998 [2024-07-13 06:21:59.250200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.250228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.999 qpair failed and we were unable to recover it. 00:26:52.999 [2024-07-13 06:21:59.250414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.250543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.250569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.999 qpair failed and we were unable to recover it. 00:26:52.999 [2024-07-13 06:21:59.250732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.250878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.250920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.999 qpair failed and we were unable to recover it. 00:26:52.999 [2024-07-13 06:21:59.251086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.251229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.251252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.999 qpair failed and we were unable to recover it. 
00:26:52.999 [2024-07-13 06:21:59.251395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.251571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.251597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.999 qpair failed and we were unable to recover it. 00:26:52.999 [2024-07-13 06:21:59.251760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.251911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.251953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.999 qpair failed and we were unable to recover it. 00:26:52.999 [2024-07-13 06:21:59.252115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.252265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.252289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.999 qpair failed and we were unable to recover it. 00:26:52.999 [2024-07-13 06:21:59.252407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.252570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.252596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.999 qpair failed and we were unable to recover it. 00:26:52.999 [2024-07-13 06:21:59.252747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.252920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.252964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.999 qpair failed and we were unable to recover it. 00:26:52.999 [2024-07-13 06:21:59.253093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.253239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.253266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.999 qpair failed and we were unable to recover it. 00:26:52.999 [2024-07-13 06:21:59.253426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.253616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.253640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.999 qpair failed and we were unable to recover it. 
00:26:52.999 [2024-07-13 06:21:59.253799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.253930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.253956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.999 qpair failed and we were unable to recover it. 00:26:52.999 [2024-07-13 06:21:59.254130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.254310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.254355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.999 qpair failed and we were unable to recover it. 00:26:52.999 [2024-07-13 06:21:59.254508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.254672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.254700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.999 qpair failed and we were unable to recover it. 00:26:52.999 [2024-07-13 06:21:59.254884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.255009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.255049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.999 qpair failed and we were unable to recover it. 00:26:52.999 [2024-07-13 06:21:59.255224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.255365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.255389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.999 qpair failed and we were unable to recover it. 00:26:52.999 [2024-07-13 06:21:59.255577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.255736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.255763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.999 qpair failed and we were unable to recover it. 00:26:52.999 [2024-07-13 06:21:59.255928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.256062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.256087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.999 qpair failed and we were unable to recover it. 
00:26:52.999 [2024-07-13 06:21:59.256236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.256383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.256407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.999 qpair failed and we were unable to recover it. 00:26:52.999 [2024-07-13 06:21:59.256553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.256723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.256750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.999 qpair failed and we were unable to recover it. 00:26:52.999 [2024-07-13 06:21:59.256921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.257047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.257072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.999 qpair failed and we were unable to recover it. 00:26:52.999 [2024-07-13 06:21:59.257187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.257354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.257381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.999 qpair failed and we were unable to recover it. 00:26:52.999 [2024-07-13 06:21:59.257579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.257744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.257768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.999 qpair failed and we were unable to recover it. 00:26:52.999 [2024-07-13 06:21:59.257922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.258033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.258058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.999 qpair failed and we were unable to recover it. 00:26:52.999 [2024-07-13 06:21:59.258179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.258323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.258349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.999 qpair failed and we were unable to recover it. 
00:26:52.999 [2024-07-13 06:21:59.258524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.258684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.258711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.999 qpair failed and we were unable to recover it. 00:26:52.999 [2024-07-13 06:21:59.258846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.258978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.259002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.999 qpair failed and we were unable to recover it. 00:26:52.999 [2024-07-13 06:21:59.259144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.259286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.259310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.999 qpair failed and we were unable to recover it. 00:26:52.999 [2024-07-13 06:21:59.259436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.259555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.259579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.999 qpair failed and we were unable to recover it. 00:26:52.999 [2024-07-13 06:21:59.259692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.259833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.259858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.999 qpair failed and we were unable to recover it. 00:26:52.999 [2024-07-13 06:21:59.260019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.260146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.260174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:52.999 qpair failed and we were unable to recover it. 00:26:52.999 [2024-07-13 06:21:59.260332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.999 [2024-07-13 06:21:59.260526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.260550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.000 qpair failed and we were unable to recover it. 
00:26:53.000 [2024-07-13 06:21:59.260721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.260839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.260864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.000 qpair failed and we were unable to recover it. 00:26:53.000 [2024-07-13 06:21:59.261022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.261152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.261180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.000 qpair failed and we were unable to recover it. 00:26:53.000 [2024-07-13 06:21:59.261389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.261502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.261526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.000 qpair failed and we were unable to recover it. 00:26:53.000 [2024-07-13 06:21:59.261643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.261789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.261813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.000 qpair failed and we were unable to recover it. 00:26:53.000 [2024-07-13 06:21:59.261952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.262070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.262094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.000 qpair failed and we were unable to recover it. 00:26:53.000 [2024-07-13 06:21:59.262268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.262454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.262482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.000 qpair failed and we were unable to recover it. 00:26:53.000 [2024-07-13 06:21:59.262624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.262749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.262773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.000 qpair failed and we were unable to recover it. 
00:26:53.000 [2024-07-13 06:21:59.262932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.263057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.263081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.000 qpair failed and we were unable to recover it. 00:26:53.000 [2024-07-13 06:21:59.263213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.263352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.263378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.000 qpair failed and we were unable to recover it. 00:26:53.000 [2024-07-13 06:21:59.263571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.263692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.263717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.000 qpair failed and we were unable to recover it. 00:26:53.000 [2024-07-13 06:21:59.263871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.264024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.264049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.000 qpair failed and we were unable to recover it. 00:26:53.000 [2024-07-13 06:21:59.264193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.264381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.264407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.000 qpair failed and we were unable to recover it. 00:26:53.000 [2024-07-13 06:21:59.264556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.264728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.264753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.000 qpair failed and we were unable to recover it. 00:26:53.000 [2024-07-13 06:21:59.264994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.265105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.265128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.000 qpair failed and we were unable to recover it. 
00:26:53.000 [2024-07-13 06:21:59.265300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.265434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.265461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.000 qpair failed and we were unable to recover it. 00:26:53.000 [2024-07-13 06:21:59.265647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.265838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.265872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.000 qpair failed and we were unable to recover it. 00:26:53.000 [2024-07-13 06:21:59.266014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.266185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.266213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.000 qpair failed and we were unable to recover it. 00:26:53.000 [2024-07-13 06:21:59.266383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.266548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.266575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.000 qpair failed and we were unable to recover it. 00:26:53.000 [2024-07-13 06:21:59.266743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.266925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.266950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.000 qpair failed and we were unable to recover it. 00:26:53.000 [2024-07-13 06:21:59.267098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.267294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.267321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.000 qpair failed and we were unable to recover it. 00:26:53.000 [2024-07-13 06:21:59.267469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.267618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.267646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.000 qpair failed and we were unable to recover it. 
00:26:53.000 [2024-07-13 06:21:59.267812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.267944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.267970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.000 qpair failed and we were unable to recover it. 00:26:53.000 [2024-07-13 06:21:59.268118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.268266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.268290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.000 qpair failed and we were unable to recover it. 00:26:53.000 [2024-07-13 06:21:59.268582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.268777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.000 [2024-07-13 06:21:59.268804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.000 qpair failed and we were unable to recover it. 00:26:53.001 [2024-07-13 06:21:59.268973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.269124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.269149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.001 qpair failed and we were unable to recover it. 00:26:53.001 [2024-07-13 06:21:59.269319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.269473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.269500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.001 qpair failed and we were unable to recover it. 00:26:53.001 [2024-07-13 06:21:59.269659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.269817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.269843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.001 qpair failed and we were unable to recover it. 00:26:53.001 [2024-07-13 06:21:59.270030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.270198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.270225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.001 qpair failed and we were unable to recover it. 
00:26:53.001 [2024-07-13 06:21:59.270410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.270575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.270599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.001 qpair failed and we were unable to recover it. 00:26:53.001 [2024-07-13 06:21:59.270779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.270932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.270958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.001 qpair failed and we were unable to recover it. 00:26:53.001 [2024-07-13 06:21:59.271107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.271252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.271294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.001 qpair failed and we were unable to recover it. 00:26:53.001 [2024-07-13 06:21:59.271426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.271553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.271579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.001 qpair failed and we were unable to recover it. 00:26:53.001 [2024-07-13 06:21:59.271709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.271877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.271920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.001 qpair failed and we were unable to recover it. 00:26:53.001 [2024-07-13 06:21:59.272068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.272214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.272238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.001 qpair failed and we were unable to recover it. 00:26:53.001 [2024-07-13 06:21:59.272381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.272573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.272600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.001 qpair failed and we were unable to recover it. 
00:26:53.001 [2024-07-13 06:21:59.272731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.272941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.272966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.001 qpair failed and we were unable to recover it. 00:26:53.001 [2024-07-13 06:21:59.273088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.273259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.273300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.001 qpair failed and we were unable to recover it. 00:26:53.001 [2024-07-13 06:21:59.273446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.273569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.273596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.001 qpair failed and we were unable to recover it. 00:26:53.001 [2024-07-13 06:21:59.273786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.273961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.273986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.001 qpair failed and we were unable to recover it. 00:26:53.001 [2024-07-13 06:21:59.274100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.274212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.274237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.001 qpair failed and we were unable to recover it. 00:26:53.001 [2024-07-13 06:21:59.274420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.274574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.274602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.001 qpair failed and we were unable to recover it. 00:26:53.001 [2024-07-13 06:21:59.274726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.274926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.274955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.001 qpair failed and we were unable to recover it. 
00:26:53.001 [2024-07-13 06:21:59.275087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.275225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.275250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.001 qpair failed and we were unable to recover it. 00:26:53.001 [2024-07-13 06:21:59.275420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.275573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.275600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.001 qpair failed and we were unable to recover it. 00:26:53.001 [2024-07-13 06:21:59.275768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.275924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.275953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.001 qpair failed and we were unable to recover it. 00:26:53.001 [2024-07-13 06:21:59.276102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.276267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.276310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.001 qpair failed and we were unable to recover it. 00:26:53.001 [2024-07-13 06:21:59.276470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.276654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.276681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.001 qpair failed and we were unable to recover it. 00:26:53.001 [2024-07-13 06:21:59.276812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.276981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.277007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.001 qpair failed and we were unable to recover it. 00:26:53.001 [2024-07-13 06:21:59.277176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.277329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.277356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.001 qpair failed and we were unable to recover it. 
00:26:53.001 [2024-07-13 06:21:59.277539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.277679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.277703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.001 qpair failed and we were unable to recover it. 00:26:53.001 [2024-07-13 06:21:59.277894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.278085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.278110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.001 qpair failed and we were unable to recover it. 00:26:53.001 [2024-07-13 06:21:59.278256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.278399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.278440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.001 qpair failed and we were unable to recover it. 00:26:53.001 [2024-07-13 06:21:59.278569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.278721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.278749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.001 qpair failed and we were unable to recover it. 00:26:53.001 [2024-07-13 06:21:59.278935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.279077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.279102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.001 qpair failed and we were unable to recover it. 00:26:53.001 [2024-07-13 06:21:59.279225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.279343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.001 [2024-07-13 06:21:59.279367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.002 qpair failed and we were unable to recover it. 00:26:53.002 [2024-07-13 06:21:59.279563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.279747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.279774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.002 qpair failed and we were unable to recover it. 
00:26:53.002 [2024-07-13 06:21:59.279959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.280071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.280095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.002 qpair failed and we were unable to recover it. 00:26:53.002 [2024-07-13 06:21:59.280267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.280439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.280466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.002 qpair failed and we were unable to recover it. 00:26:53.002 [2024-07-13 06:21:59.280659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.280843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.280944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.002 qpair failed and we were unable to recover it. 00:26:53.002 [2024-07-13 06:21:59.281105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.281278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.281319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.002 qpair failed and we were unable to recover it. 00:26:53.002 [2024-07-13 06:21:59.281463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.281633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.281657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.002 qpair failed and we were unable to recover it. 00:26:53.002 [2024-07-13 06:21:59.281802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.281947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.281973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.002 qpair failed and we were unable to recover it. 00:26:53.002 [2024-07-13 06:21:59.282127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.282297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.282322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.002 qpair failed and we were unable to recover it. 
00:26:53.002 [2024-07-13 06:21:59.282493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.282638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.282663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.002 qpair failed and we were unable to recover it. 00:26:53.002 [2024-07-13 06:21:59.282873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.283016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.283040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.002 qpair failed and we were unable to recover it. 00:26:53.002 [2024-07-13 06:21:59.283160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.283300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.283328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.002 qpair failed and we were unable to recover it. 00:26:53.002 [2024-07-13 06:21:59.283514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.283701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.283729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.002 qpair failed and we were unable to recover it. 00:26:53.002 [2024-07-13 06:21:59.283893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.284030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.284059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.002 qpair failed and we were unable to recover it. 00:26:53.002 [2024-07-13 06:21:59.284211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.284386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.284410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.002 qpair failed and we were unable to recover it. 00:26:53.002 [2024-07-13 06:21:59.284558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.284701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.284725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.002 qpair failed and we were unable to recover it. 
00:26:53.002 [2024-07-13 06:21:59.284875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.285020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.285044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.002 qpair failed and we were unable to recover it. 00:26:53.002 [2024-07-13 06:21:59.285190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.285325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.285352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.002 qpair failed and we were unable to recover it. 00:26:53.002 [2024-07-13 06:21:59.285490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.285613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.285636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.002 qpair failed and we were unable to recover it. 00:26:53.002 [2024-07-13 06:21:59.285797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.285973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.285998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.002 qpair failed and we were unable to recover it. 00:26:53.002 [2024-07-13 06:21:59.286147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.286310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.286337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.002 qpair failed and we were unable to recover it. 00:26:53.002 [2024-07-13 06:21:59.286470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.286636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.286660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.002 qpair failed and we were unable to recover it. 00:26:53.002 [2024-07-13 06:21:59.286836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.286997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.287025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.002 qpair failed and we were unable to recover it. 
00:26:53.002 [2024-07-13 06:21:59.287214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.287347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.287380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.002 qpair failed and we were unable to recover it. 00:26:53.002 [2024-07-13 06:21:59.287572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.287711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.287752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.002 qpair failed and we were unable to recover it. 00:26:53.002 [2024-07-13 06:21:59.287881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.288038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.288066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.002 qpair failed and we were unable to recover it. 00:26:53.002 [2024-07-13 06:21:59.288228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.288357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.288383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.002 qpair failed and we were unable to recover it. 00:26:53.002 [2024-07-13 06:21:59.288576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.288720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.288761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.002 qpair failed and we were unable to recover it. 00:26:53.002 [2024-07-13 06:21:59.288927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.289109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.289136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.002 qpair failed and we were unable to recover it. 00:26:53.002 [2024-07-13 06:21:59.289307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.289489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.289516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.002 qpair failed and we were unable to recover it. 
00:26:53.002 [2024-07-13 06:21:59.289681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.289827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.289852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.002 qpair failed and we were unable to recover it. 00:26:53.002 [2024-07-13 06:21:59.290036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.002 [2024-07-13 06:21:59.290196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.290222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.003 qpair failed and we were unable to recover it. 00:26:53.003 [2024-07-13 06:21:59.290413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.290571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.290598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.003 qpair failed and we were unable to recover it. 00:26:53.003 [2024-07-13 06:21:59.290760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.290916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.290958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.003 qpair failed and we were unable to recover it. 00:26:53.003 [2024-07-13 06:21:59.291097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.291227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.291254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.003 qpair failed and we were unable to recover it. 00:26:53.003 [2024-07-13 06:21:59.291388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.291515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.291542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.003 qpair failed and we were unable to recover it. 00:26:53.003 [2024-07-13 06:21:59.291685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.291806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.291830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.003 qpair failed and we were unable to recover it. 
00:26:53.003 [2024-07-13 06:21:59.292018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.292165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.292207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.003 qpair failed and we were unable to recover it. 00:26:53.003 [2024-07-13 06:21:59.292395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.292548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.292576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.003 qpair failed and we were unable to recover it. 00:26:53.003 [2024-07-13 06:21:59.292733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.292885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.292912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.003 qpair failed and we were unable to recover it. 00:26:53.003 [2024-07-13 06:21:59.293077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.293278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.293305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.003 qpair failed and we were unable to recover it. 00:26:53.003 [2024-07-13 06:21:59.293499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.293669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.293710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.003 qpair failed and we were unable to recover it. 00:26:53.003 [2024-07-13 06:21:59.293884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.294008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.294032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.003 qpair failed and we were unable to recover it. 00:26:53.003 [2024-07-13 06:21:59.294149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.294320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.294347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.003 qpair failed and we were unable to recover it. 
00:26:53.003 [2024-07-13 06:21:59.294535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.294659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.294685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.003 qpair failed and we were unable to recover it. 00:26:53.003 [2024-07-13 06:21:59.294828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.294952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.294977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.003 qpair failed and we were unable to recover it. 00:26:53.003 [2024-07-13 06:21:59.295138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.295321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.295347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.003 qpair failed and we were unable to recover it. 00:26:53.003 [2024-07-13 06:21:59.295512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.295637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.295666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.003 qpair failed and we were unable to recover it. 00:26:53.003 [2024-07-13 06:21:59.295870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.296008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.296035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.003 qpair failed and we were unable to recover it. 00:26:53.003 [2024-07-13 06:21:59.296222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.296355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.296383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.003 qpair failed and we were unable to recover it. 00:26:53.003 [2024-07-13 06:21:59.296538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.296705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.296733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.003 qpair failed and we were unable to recover it. 
00:26:53.003 [2024-07-13 06:21:59.296905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.297096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.297123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.003 qpair failed and we were unable to recover it. 00:26:53.003 [2024-07-13 06:21:59.297258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.297379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.297407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.003 qpair failed and we were unable to recover it. 00:26:53.003 [2024-07-13 06:21:59.297572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.297709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.297734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.003 qpair failed and we were unable to recover it. 00:26:53.003 [2024-07-13 06:21:59.297879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.298002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.298027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.003 qpair failed and we were unable to recover it. 00:26:53.003 [2024-07-13 06:21:59.298156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.298315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.298342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.003 qpair failed and we were unable to recover it. 00:26:53.003 [2024-07-13 06:21:59.298506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.298662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.298689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.003 qpair failed and we were unable to recover it. 00:26:53.003 [2024-07-13 06:21:59.298857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.299009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.003 [2024-07-13 06:21:59.299034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.003 qpair failed and we were unable to recover it. 
00:26:53.003 [2024-07-13 06:21:59.299149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.003 [2024-07-13 06:21:59.299286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.003 [2024-07-13 06:21:59.299311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420
00:26:53.003 qpair failed and we were unable to recover it.
00:26:53.003 [identical failure pattern repeated: two posix.c:1032:posix_sock_create connect() errors with errno = 111, then an nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock error for tqpair=0x151d9f0 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." -- this four-line sequence recurs for every subsequent connection attempt in this capture, wall clock 00:26:53.003 through 00:26:53.008]
00:26:53.008 [2024-07-13 06:21:59.349037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.349204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.349227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.009 qpair failed and we were unable to recover it. 00:26:53.009 [2024-07-13 06:21:59.349346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.349466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.349492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.009 qpair failed and we were unable to recover it. 00:26:53.009 [2024-07-13 06:21:59.349662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.349806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.349830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.009 qpair failed and we were unable to recover it. 00:26:53.009 [2024-07-13 06:21:59.349998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.350147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.350171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.009 qpair failed and we were unable to recover it. 00:26:53.009 [2024-07-13 06:21:59.350340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.350456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.350480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.009 qpair failed and we were unable to recover it. 00:26:53.009 [2024-07-13 06:21:59.350591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.350709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.350733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.009 qpair failed and we were unable to recover it. 00:26:53.009 [2024-07-13 06:21:59.350880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.351022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.351047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.009 qpair failed and we were unable to recover it. 
00:26:53.009 [2024-07-13 06:21:59.351193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.351336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.351378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.009 qpair failed and we were unable to recover it. 00:26:53.009 [2024-07-13 06:21:59.351511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.351670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.351697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.009 qpair failed and we were unable to recover it. 00:26:53.009 [2024-07-13 06:21:59.351876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.352020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.352044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.009 qpair failed and we were unable to recover it. 00:26:53.009 [2024-07-13 06:21:59.352194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.352396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.352421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.009 qpair failed and we were unable to recover it. 00:26:53.009 [2024-07-13 06:21:59.352537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.352705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.352729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.009 qpair failed and we were unable to recover it. 00:26:53.009 [2024-07-13 06:21:59.352877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.353030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.353058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.009 qpair failed and we were unable to recover it. 00:26:53.009 [2024-07-13 06:21:59.353261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.353405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.353429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.009 qpair failed and we were unable to recover it. 
00:26:53.009 [2024-07-13 06:21:59.353577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.353760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.353787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.009 qpair failed and we were unable to recover it. 00:26:53.009 [2024-07-13 06:21:59.353928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.354077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.354102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.009 qpair failed and we were unable to recover it. 00:26:53.009 [2024-07-13 06:21:59.354246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.354402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.354429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.009 qpair failed and we were unable to recover it. 00:26:53.009 [2024-07-13 06:21:59.354615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.354775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.354801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.009 qpair failed and we were unable to recover it. 00:26:53.009 [2024-07-13 06:21:59.354968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.355129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.355156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.009 qpair failed and we were unable to recover it. 00:26:53.009 [2024-07-13 06:21:59.355318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.355483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.355507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.009 qpair failed and we were unable to recover it. 00:26:53.009 [2024-07-13 06:21:59.355689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.355877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.355905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.009 qpair failed and we were unable to recover it. 
00:26:53.009 [2024-07-13 06:21:59.356036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.356169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.356197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.009 qpair failed and we were unable to recover it. 00:26:53.009 [2024-07-13 06:21:59.356389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.356532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.356556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.009 qpair failed and we were unable to recover it. 00:26:53.009 [2024-07-13 06:21:59.356736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.356884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.356926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.009 qpair failed and we were unable to recover it. 00:26:53.009 [2024-07-13 06:21:59.357112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.357269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.357297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.009 qpair failed and we were unable to recover it. 00:26:53.009 [2024-07-13 06:21:59.357458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.357591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.009 [2024-07-13 06:21:59.357622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.009 qpair failed and we were unable to recover it. 00:26:53.010 [2024-07-13 06:21:59.357788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.357966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.357991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.010 qpair failed and we were unable to recover it. 00:26:53.010 [2024-07-13 06:21:59.358119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.358243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.358267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.010 qpair failed and we were unable to recover it. 
00:26:53.010 [2024-07-13 06:21:59.358452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.358620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.358645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.010 qpair failed and we were unable to recover it. 00:26:53.010 [2024-07-13 06:21:59.358793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.358922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.358951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.010 qpair failed and we were unable to recover it. 00:26:53.010 [2024-07-13 06:21:59.359113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.359282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.359307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.010 qpair failed and we were unable to recover it. 00:26:53.010 [2024-07-13 06:21:59.359476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.359664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.359690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.010 qpair failed and we were unable to recover it. 00:26:53.010 [2024-07-13 06:21:59.359845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.360034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.360062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.010 qpair failed and we were unable to recover it. 00:26:53.010 [2024-07-13 06:21:59.360221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.360392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.360416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.010 qpair failed and we were unable to recover it. 00:26:53.010 [2024-07-13 06:21:59.360531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.360724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.360751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.010 qpair failed and we were unable to recover it. 
00:26:53.010 [2024-07-13 06:21:59.360890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.361031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.361056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.010 qpair failed and we were unable to recover it. 00:26:53.010 [2024-07-13 06:21:59.361233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.361408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.361433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.010 qpair failed and we were unable to recover it. 00:26:53.010 [2024-07-13 06:21:59.361578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.361739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.361763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.010 qpair failed and we were unable to recover it. 00:26:53.010 [2024-07-13 06:21:59.361914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.362101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.362126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.010 qpair failed and we were unable to recover it. 00:26:53.010 [2024-07-13 06:21:59.362244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.362353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.362378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.010 qpair failed and we were unable to recover it. 00:26:53.010 [2024-07-13 06:21:59.362532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.362706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.362731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.010 qpair failed and we were unable to recover it. 00:26:53.010 [2024-07-13 06:21:59.362889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.363088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.363116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.010 qpair failed and we were unable to recover it. 
00:26:53.010 [2024-07-13 06:21:59.363316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.363430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.363455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.010 qpair failed and we were unable to recover it. 00:26:53.010 [2024-07-13 06:21:59.363621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.363761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.363785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.010 qpair failed and we were unable to recover it. 00:26:53.010 [2024-07-13 06:21:59.363923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.364087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.364114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.010 qpair failed and we were unable to recover it. 00:26:53.010 [2024-07-13 06:21:59.364253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.364403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.364430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.010 qpair failed and we were unable to recover it. 00:26:53.010 [2024-07-13 06:21:59.364590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.364779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.364806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.010 qpair failed and we were unable to recover it. 00:26:53.010 [2024-07-13 06:21:59.364988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.365128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.365152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.010 qpair failed and we were unable to recover it. 00:26:53.010 [2024-07-13 06:21:59.365327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.365478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.365505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.010 qpair failed and we were unable to recover it. 
00:26:53.010 [2024-07-13 06:21:59.365662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.365822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.365849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.010 qpair failed and we were unable to recover it. 00:26:53.010 [2024-07-13 06:21:59.366001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.366163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.366188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.010 qpair failed and we were unable to recover it. 00:26:53.010 [2024-07-13 06:21:59.366308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.366454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.366479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.010 qpair failed and we were unable to recover it. 00:26:53.010 [2024-07-13 06:21:59.366686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.366801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.366826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.010 qpair failed and we were unable to recover it. 00:26:53.010 [2024-07-13 06:21:59.366973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.367144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.367171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.010 qpair failed and we were unable to recover it. 00:26:53.010 [2024-07-13 06:21:59.367340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.367521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.367546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.010 qpair failed and we were unable to recover it. 00:26:53.010 [2024-07-13 06:21:59.367686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.367804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.367828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.010 qpair failed and we were unable to recover it. 
00:26:53.010 [2024-07-13 06:21:59.368014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.368154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.010 [2024-07-13 06:21:59.368196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.010 qpair failed and we were unable to recover it. 00:26:53.011 [2024-07-13 06:21:59.368324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.368473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.368501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.011 qpair failed and we were unable to recover it. 00:26:53.011 [2024-07-13 06:21:59.368661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.368854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.368887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.011 qpair failed and we were unable to recover it. 00:26:53.011 [2024-07-13 06:21:59.369033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.369179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.369204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.011 qpair failed and we were unable to recover it. 00:26:53.011 [2024-07-13 06:21:59.369326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.369476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.369504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.011 qpair failed and we were unable to recover it. 00:26:53.011 [2024-07-13 06:21:59.369657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.369828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.369853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.011 qpair failed and we were unable to recover it. 00:26:53.011 [2024-07-13 06:21:59.370067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.370189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.370216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.011 qpair failed and we were unable to recover it. 
00:26:53.011 [2024-07-13 06:21:59.370378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.370519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.370544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.011 qpair failed and we were unable to recover it. 00:26:53.011 [2024-07-13 06:21:59.370726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.370894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.370935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.011 qpair failed and we were unable to recover it. 00:26:53.011 [2024-07-13 06:21:59.371129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.371263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.371287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.011 qpair failed and we were unable to recover it. 00:26:53.011 [2024-07-13 06:21:59.371410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.371559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.371587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.011 qpair failed and we were unable to recover it. 00:26:53.011 [2024-07-13 06:21:59.371780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.371967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.371994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.011 qpair failed and we were unable to recover it. 00:26:53.011 [2024-07-13 06:21:59.372188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.372347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.372375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.011 qpair failed and we were unable to recover it. 00:26:53.011 [2024-07-13 06:21:59.372557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.372726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.372750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.011 qpair failed and we were unable to recover it. 
00:26:53.011 [2024-07-13 06:21:59.372909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.373078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.373107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.011 qpair failed and we were unable to recover it. 00:26:53.011 [2024-07-13 06:21:59.373249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.373397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.373422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.011 qpair failed and we were unable to recover it. 00:26:53.011 [2024-07-13 06:21:59.373546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.373717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.373740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.011 qpair failed and we were unable to recover it. 00:26:53.011 [2024-07-13 06:21:59.373890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.374006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.374030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.011 qpair failed and we were unable to recover it. 00:26:53.011 [2024-07-13 06:21:59.374175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.374337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.374364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.011 qpair failed and we were unable to recover it. 00:26:53.011 [2024-07-13 06:21:59.374525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.374690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.374714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.011 qpair failed and we were unable to recover it. 00:26:53.011 [2024-07-13 06:21:59.374917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.375110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.375139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.011 qpair failed and we were unable to recover it. 
00:26:53.011 [2024-07-13 06:21:59.375285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.375429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.375453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.011 qpair failed and we were unable to recover it. 00:26:53.011 [2024-07-13 06:21:59.375621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.375805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.375832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.011 qpair failed and we were unable to recover it. 00:26:53.011 [2024-07-13 06:21:59.376016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.376163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.376188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.011 qpair failed and we were unable to recover it. 00:26:53.011 [2024-07-13 06:21:59.376303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.376472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.376497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.011 qpair failed and we were unable to recover it. 00:26:53.011 [2024-07-13 06:21:59.376688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.376845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.376877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.011 qpair failed and we were unable to recover it. 00:26:53.011 [2024-07-13 06:21:59.377043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.377184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.377213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.011 qpair failed and we were unable to recover it. 00:26:53.011 [2024-07-13 06:21:59.377406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.377579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.377606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.011 qpair failed and we were unable to recover it. 
00:26:53.011 [2024-07-13 06:21:59.377763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.377925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.377952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.011 qpair failed and we were unable to recover it. 00:26:53.011 [2024-07-13 06:21:59.378111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.378270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.378297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.011 qpair failed and we were unable to recover it. 00:26:53.011 [2024-07-13 06:21:59.378462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.378621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.378648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.011 qpair failed and we were unable to recover it. 00:26:53.011 [2024-07-13 06:21:59.378852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.011 [2024-07-13 06:21:59.379020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.012 [2024-07-13 06:21:59.379049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.012 qpair failed and we were unable to recover it. 00:26:53.012 [2024-07-13 06:21:59.379235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.012 [2024-07-13 06:21:59.379400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.012 [2024-07-13 06:21:59.379427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.012 qpair failed and we were unable to recover it. 00:26:53.012 [2024-07-13 06:21:59.379551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.012 [2024-07-13 06:21:59.379707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.012 [2024-07-13 06:21:59.379733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.012 qpair failed and we were unable to recover it. 00:26:53.012 [2024-07-13 06:21:59.379906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.012 [2024-07-13 06:21:59.380056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.012 [2024-07-13 06:21:59.380081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.012 qpair failed and we were unable to recover it. 
00:26:53.012 [2024-07-13 06:21:59.380193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.012 [2024-07-13 06:21:59.380342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.012 [2024-07-13 06:21:59.380366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.012 qpair failed and we were unable to recover it. 00:26:53.012 [2024-07-13 06:21:59.380532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.012 [2024-07-13 06:21:59.380729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.012 [2024-07-13 06:21:59.380754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.012 qpair failed and we were unable to recover it. 00:26:53.012 [2024-07-13 06:21:59.380876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.012 [2024-07-13 06:21:59.381039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.012 [2024-07-13 06:21:59.381066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.012 qpair failed and we were unable to recover it. 00:26:53.012 [2024-07-13 06:21:59.381227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.012 [2024-07-13 06:21:59.381383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.012 [2024-07-13 06:21:59.381410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.012 qpair failed and we were unable to recover it. 00:26:53.012 [2024-07-13 06:21:59.381554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.012 [2024-07-13 06:21:59.381678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.012 [2024-07-13 06:21:59.381701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.012 qpair failed and we were unable to recover it. 00:26:53.012 [2024-07-13 06:21:59.381815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.012 [2024-07-13 06:21:59.381954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.012 [2024-07-13 06:21:59.381979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.012 qpair failed and we were unable to recover it. 00:26:53.012 [2024-07-13 06:21:59.382156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.012 [2024-07-13 06:21:59.382288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.012 [2024-07-13 06:21:59.382314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.012 qpair failed and we were unable to recover it. 
00:26:53.012 [2024-07-13 06:21:59.382503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.012 [2024-07-13 06:21:59.382658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.012 [2024-07-13 06:21:59.382686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420
00:26:53.012 qpair failed and we were unable to recover it.
[2024-07-13 06:21:59.382819 - 06:21:59.432482] the same four-message sequence (posix_sock_create: connect() failed, errno = 111, reported twice; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously.
00:26:53.017 [2024-07-13 06:21:59.432628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.017 [2024-07-13 06:21:59.432771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.017 [2024-07-13 06:21:59.432795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420
00:26:53.017 qpair failed and we were unable to recover it.
00:26:53.017 [2024-07-13 06:21:59.432943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.017 [2024-07-13 06:21:59.433058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.017 [2024-07-13 06:21:59.433082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.017 qpair failed and we were unable to recover it. 00:26:53.017 [2024-07-13 06:21:59.433224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.017 [2024-07-13 06:21:59.433340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.017 [2024-07-13 06:21:59.433364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.017 qpair failed and we were unable to recover it. 00:26:53.017 [2024-07-13 06:21:59.433505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.017 [2024-07-13 06:21:59.433706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.017 [2024-07-13 06:21:59.433734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.017 qpair failed and we were unable to recover it. 00:26:53.017 [2024-07-13 06:21:59.433902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.017 [2024-07-13 06:21:59.434021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.017 [2024-07-13 06:21:59.434046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.017 qpair failed and we were unable to recover it. 00:26:53.017 [2024-07-13 06:21:59.434214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.017 [2024-07-13 06:21:59.434336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.017 [2024-07-13 06:21:59.434360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.017 qpair failed and we were unable to recover it. 00:26:53.017 [2024-07-13 06:21:59.434505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.017 [2024-07-13 06:21:59.434678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.017 [2024-07-13 06:21:59.434702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.017 qpair failed and we were unable to recover it. 00:26:53.017 [2024-07-13 06:21:59.434844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.017 [2024-07-13 06:21:59.435013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.017 [2024-07-13 06:21:59.435038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.017 qpair failed and we were unable to recover it. 
00:26:53.017 [2024-07-13 06:21:59.435198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.017 [2024-07-13 06:21:59.435312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.017 [2024-07-13 06:21:59.435336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.017 qpair failed and we were unable to recover it. 00:26:53.017 [2024-07-13 06:21:59.435486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.017 [2024-07-13 06:21:59.435625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.017 [2024-07-13 06:21:59.435649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.017 qpair failed and we were unable to recover it. 00:26:53.017 [2024-07-13 06:21:59.435849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.017 [2024-07-13 06:21:59.435993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.017 [2024-07-13 06:21:59.436020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.017 qpair failed and we were unable to recover it. 00:26:53.017 [2024-07-13 06:21:59.436182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.017 [2024-07-13 06:21:59.436342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.017 [2024-07-13 06:21:59.436370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.017 qpair failed and we were unable to recover it. 00:26:53.017 [2024-07-13 06:21:59.436528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.017 [2024-07-13 06:21:59.436695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.017 [2024-07-13 06:21:59.436719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.018 qpair failed and we were unable to recover it. 00:26:53.018 [2024-07-13 06:21:59.436871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.437016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.437041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.018 qpair failed and we were unable to recover it. 00:26:53.018 [2024-07-13 06:21:59.437201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.437368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.437395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.018 qpair failed and we were unable to recover it. 
00:26:53.018 [2024-07-13 06:21:59.437563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.437731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.437755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.018 qpair failed and we were unable to recover it. 00:26:53.018 [2024-07-13 06:21:59.437916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.438060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.438084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.018 qpair failed and we were unable to recover it. 00:26:53.018 [2024-07-13 06:21:59.438284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.438442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.438469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.018 qpair failed and we were unable to recover it. 00:26:53.018 [2024-07-13 06:21:59.438630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.438791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.438817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.018 qpair failed and we were unable to recover it. 00:26:53.018 [2024-07-13 06:21:59.439013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.439144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.439171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.018 qpair failed and we were unable to recover it. 00:26:53.018 [2024-07-13 06:21:59.439323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.439467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.439491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.018 qpair failed and we were unable to recover it. 00:26:53.018 [2024-07-13 06:21:59.439661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.439807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.439832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.018 qpair failed and we were unable to recover it. 
00:26:53.018 [2024-07-13 06:21:59.439971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.440120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.440144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.018 qpair failed and we were unable to recover it. 00:26:53.018 [2024-07-13 06:21:59.440266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.440415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.440439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.018 qpair failed and we were unable to recover it. 00:26:53.018 [2024-07-13 06:21:59.440615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.440758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.440782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.018 qpair failed and we were unable to recover it. 00:26:53.018 [2024-07-13 06:21:59.440931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.441073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.441096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.018 qpair failed and we were unable to recover it. 00:26:53.018 [2024-07-13 06:21:59.441244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.441390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.441414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.018 qpair failed and we were unable to recover it. 00:26:53.018 [2024-07-13 06:21:59.441560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.441710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.441734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.018 qpair failed and we were unable to recover it. 00:26:53.018 [2024-07-13 06:21:59.441910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.442025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.442049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.018 qpair failed and we were unable to recover it. 
00:26:53.018 [2024-07-13 06:21:59.442168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.442349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.442373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.018 qpair failed and we were unable to recover it. 00:26:53.018 [2024-07-13 06:21:59.442496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.442637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.442661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.018 qpair failed and we were unable to recover it. 00:26:53.018 [2024-07-13 06:21:59.442791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.442936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.442962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.018 qpair failed and we were unable to recover it. 00:26:53.018 [2024-07-13 06:21:59.443084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.443223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.443248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.018 qpair failed and we were unable to recover it. 00:26:53.018 [2024-07-13 06:21:59.443420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.443540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.443565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.018 qpair failed and we were unable to recover it. 00:26:53.018 [2024-07-13 06:21:59.443677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.443845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.443880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.018 qpair failed and we were unable to recover it. 00:26:53.018 [2024-07-13 06:21:59.444033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.444148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.444181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.018 qpair failed and we were unable to recover it. 
00:26:53.018 [2024-07-13 06:21:59.444295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.444435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.444460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.018 qpair failed and we were unable to recover it. 00:26:53.018 [2024-07-13 06:21:59.444615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.444767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.444792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.018 qpair failed and we were unable to recover it. 00:26:53.018 [2024-07-13 06:21:59.444933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.445072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.445097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.018 qpair failed and we were unable to recover it. 00:26:53.018 [2024-07-13 06:21:59.445276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.445420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.445449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.018 qpair failed and we were unable to recover it. 00:26:53.018 [2024-07-13 06:21:59.445626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.445801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.445826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.018 qpair failed and we were unable to recover it. 00:26:53.018 [2024-07-13 06:21:59.445980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.446118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.446142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.018 qpair failed and we were unable to recover it. 00:26:53.018 [2024-07-13 06:21:59.446285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.446456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.446481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.018 qpair failed and we were unable to recover it. 
00:26:53.018 [2024-07-13 06:21:59.446624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.446745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.018 [2024-07-13 06:21:59.446770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.019 qpair failed and we were unable to recover it. 00:26:53.019 [2024-07-13 06:21:59.446915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.447043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.447070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.019 qpair failed and we were unable to recover it. 00:26:53.019 [2024-07-13 06:21:59.447186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.447359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.447393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.019 qpair failed and we were unable to recover it. 00:26:53.019 [2024-07-13 06:21:59.447543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.447689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.447713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.019 qpair failed and we were unable to recover it. 00:26:53.019 [2024-07-13 06:21:59.447840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.447959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.447985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.019 qpair failed and we were unable to recover it. 00:26:53.019 [2024-07-13 06:21:59.448112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.448259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.448284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.019 qpair failed and we were unable to recover it. 00:26:53.019 [2024-07-13 06:21:59.448423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.448547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.448572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.019 qpair failed and we were unable to recover it. 
00:26:53.019 [2024-07-13 06:21:59.448713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.448882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.448907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.019 qpair failed and we were unable to recover it. 00:26:53.019 [2024-07-13 06:21:59.449055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.449171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.449196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.019 qpair failed and we were unable to recover it. 00:26:53.019 [2024-07-13 06:21:59.449350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.449490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.449515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.019 qpair failed and we were unable to recover it. 00:26:53.019 [2024-07-13 06:21:59.449692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.449847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.449881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.019 qpair failed and we were unable to recover it. 00:26:53.019 [2024-07-13 06:21:59.450038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.450163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.450189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.019 qpair failed and we were unable to recover it. 00:26:53.019 [2024-07-13 06:21:59.450311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.450453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.450478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.019 qpair failed and we were unable to recover it. 00:26:53.019 [2024-07-13 06:21:59.450610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.450727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.450751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.019 qpair failed and we were unable to recover it. 
00:26:53.019 [2024-07-13 06:21:59.450899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.451097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.451125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.019 qpair failed and we were unable to recover it. 00:26:53.019 [2024-07-13 06:21:59.451298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.451429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.451456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.019 qpair failed and we were unable to recover it. 00:26:53.019 [2024-07-13 06:21:59.451577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.451700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.451727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.019 qpair failed and we were unable to recover it. 00:26:53.019 [2024-07-13 06:21:59.451947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.452069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.452094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.019 qpair failed and we were unable to recover it. 00:26:53.019 [2024-07-13 06:21:59.452220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.452367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.452391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.019 qpair failed and we were unable to recover it. 00:26:53.019 [2024-07-13 06:21:59.452534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.452675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.452699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.019 qpair failed and we were unable to recover it. 00:26:53.019 [2024-07-13 06:21:59.452853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.453002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.453034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.019 qpair failed and we were unable to recover it. 
00:26:53.019 [2024-07-13 06:21:59.453175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.453289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.453313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.019 qpair failed and we were unable to recover it. 00:26:53.019 [2024-07-13 06:21:59.453441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.453613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.453637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.019 qpair failed and we were unable to recover it. 00:26:53.019 [2024-07-13 06:21:59.453762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.453885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.453910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.019 qpair failed and we were unable to recover it. 00:26:53.019 [2024-07-13 06:21:59.454041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.454181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.454205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.019 qpair failed and we were unable to recover it. 00:26:53.019 [2024-07-13 06:21:59.454351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.454472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.454496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.019 qpair failed and we were unable to recover it. 00:26:53.019 [2024-07-13 06:21:59.454661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.454802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.454827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.019 qpair failed and we were unable to recover it. 00:26:53.019 [2024-07-13 06:21:59.454976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.455102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.455126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.019 qpair failed and we were unable to recover it. 
00:26:53.019 [2024-07-13 06:21:59.455250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.455391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.455416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.019 qpair failed and we were unable to recover it. 00:26:53.019 [2024-07-13 06:21:59.455557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.455676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.455700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.019 qpair failed and we were unable to recover it. 00:26:53.019 [2024-07-13 06:21:59.455878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.456023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.456046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.019 qpair failed and we were unable to recover it. 00:26:53.019 [2024-07-13 06:21:59.456187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.456332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.019 [2024-07-13 06:21:59.456357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.020 qpair failed and we were unable to recover it. 00:26:53.020 [2024-07-13 06:21:59.456527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.456673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.456699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.020 qpair failed and we were unable to recover it. 00:26:53.020 [2024-07-13 06:21:59.456878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.457026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.457049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.020 qpair failed and we were unable to recover it. 00:26:53.020 [2024-07-13 06:21:59.457234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.457374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.457398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.020 qpair failed and we were unable to recover it. 
00:26:53.020 [2024-07-13 06:21:59.457529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.457645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.457669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.020 qpair failed and we were unable to recover it. 00:26:53.020 [2024-07-13 06:21:59.457819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.457976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.458001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.020 qpair failed and we were unable to recover it. 00:26:53.020 [2024-07-13 06:21:59.458129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.458286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.458310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.020 qpair failed and we were unable to recover it. 00:26:53.020 [2024-07-13 06:21:59.458466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.458597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.458624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.020 qpair failed and we were unable to recover it. 00:26:53.020 [2024-07-13 06:21:59.458779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.458943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.458971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.020 qpair failed and we were unable to recover it. 00:26:53.020 [2024-07-13 06:21:59.459156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.459313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.459340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.020 qpair failed and we were unable to recover it. 00:26:53.020 [2024-07-13 06:21:59.459501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.459651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.459675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.020 qpair failed and we were unable to recover it. 
00:26:53.020 [2024-07-13 06:21:59.459798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.459941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.459967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.020 qpair failed and we were unable to recover it. 00:26:53.020 [2024-07-13 06:21:59.460091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.460212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.460255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.020 qpair failed and we were unable to recover it. 00:26:53.020 [2024-07-13 06:21:59.460391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.460551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.460578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.020 qpair failed and we were unable to recover it. 00:26:53.020 [2024-07-13 06:21:59.460774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.460915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.460940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.020 qpair failed and we were unable to recover it. 00:26:53.020 [2024-07-13 06:21:59.461083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.461199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.461223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.020 qpair failed and we were unable to recover it. 00:26:53.020 [2024-07-13 06:21:59.461345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.461498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.461526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.020 qpair failed and we were unable to recover it. 00:26:53.020 [2024-07-13 06:21:59.461676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.461825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.461850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.020 qpair failed and we were unable to recover it. 
00:26:53.020 [2024-07-13 06:21:59.462001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.462114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.462139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.020 qpair failed and we were unable to recover it. 00:26:53.020 [2024-07-13 06:21:59.462258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.462400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.462424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.020 qpair failed and we were unable to recover it. 00:26:53.020 [2024-07-13 06:21:59.462564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.462714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.462737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.020 qpair failed and we were unable to recover it. 00:26:53.020 [2024-07-13 06:21:59.462850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.463001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.463026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.020 qpair failed and we were unable to recover it. 00:26:53.020 [2024-07-13 06:21:59.463145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.463291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.463315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.020 qpair failed and we were unable to recover it. 00:26:53.020 [2024-07-13 06:21:59.463467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.463604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.463631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.020 qpair failed and we were unable to recover it. 00:26:53.020 [2024-07-13 06:21:59.463783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.463982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.464008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.020 qpair failed and we were unable to recover it. 
00:26:53.020 [2024-07-13 06:21:59.464157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.464315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.464340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.020 qpair failed and we were unable to recover it. 00:26:53.020 [2024-07-13 06:21:59.464486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.464598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.464626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.020 qpair failed and we were unable to recover it. 00:26:53.020 [2024-07-13 06:21:59.464767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.464935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.464963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.020 qpair failed and we were unable to recover it. 00:26:53.020 [2024-07-13 06:21:59.465113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.465274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.465302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.020 qpair failed and we were unable to recover it. 00:26:53.020 [2024-07-13 06:21:59.465436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.465605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.465630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.020 qpair failed and we were unable to recover it. 00:26:53.020 [2024-07-13 06:21:59.465769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.465915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.465939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.020 qpair failed and we were unable to recover it. 00:26:53.020 [2024-07-13 06:21:59.466106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.020 [2024-07-13 06:21:59.466252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.466276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.021 qpair failed and we were unable to recover it. 
00:26:53.021 [2024-07-13 06:21:59.466425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.466592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.466617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.021 qpair failed and we were unable to recover it. 00:26:53.021 [2024-07-13 06:21:59.466800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.466947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.466972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.021 qpair failed and we were unable to recover it. 00:26:53.021 [2024-07-13 06:21:59.467094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.467238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.467261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.021 qpair failed and we were unable to recover it. 00:26:53.021 [2024-07-13 06:21:59.467403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.467551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.467576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.021 qpair failed and we were unable to recover it. 00:26:53.021 [2024-07-13 06:21:59.467747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.467891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.467917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.021 qpair failed and we were unable to recover it. 00:26:53.021 [2024-07-13 06:21:59.468095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.468214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.468239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.021 qpair failed and we were unable to recover it. 00:26:53.021 [2024-07-13 06:21:59.468430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.468546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.468571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.021 qpair failed and we were unable to recover it. 
00:26:53.021 [2024-07-13 06:21:59.468706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.468825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.468849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.021 qpair failed and we were unable to recover it. 00:26:53.021 [2024-07-13 06:21:59.469007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.469126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.469151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.021 qpair failed and we were unable to recover it. 00:26:53.021 [2024-07-13 06:21:59.469296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.469447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.469471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.021 qpair failed and we were unable to recover it. 00:26:53.021 [2024-07-13 06:21:59.469591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.469731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.469754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.021 qpair failed and we were unable to recover it. 00:26:53.021 [2024-07-13 06:21:59.469914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.470058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.470083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.021 qpair failed and we were unable to recover it. 00:26:53.021 [2024-07-13 06:21:59.470235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.470379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.470403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.021 qpair failed and we were unable to recover it. 00:26:53.021 [2024-07-13 06:21:59.470549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.470679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.470703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.021 qpair failed and we were unable to recover it. 
00:26:53.021 [2024-07-13 06:21:59.470891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.471048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.471073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.021 qpair failed and we were unable to recover it. 00:26:53.021 [2024-07-13 06:21:59.471239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.471357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.471381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.021 qpair failed and we were unable to recover it. 00:26:53.021 [2024-07-13 06:21:59.471545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.471663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.471687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.021 qpair failed and we were unable to recover it. 00:26:53.021 [2024-07-13 06:21:59.471837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.472014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.472038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.021 qpair failed and we were unable to recover it. 00:26:53.021 [2024-07-13 06:21:59.472162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.472302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.472325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.021 qpair failed and we were unable to recover it. 00:26:53.021 [2024-07-13 06:21:59.472463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.472603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.472626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.021 qpair failed and we were unable to recover it. 00:26:53.021 [2024-07-13 06:21:59.472742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.472911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.472936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.021 qpair failed and we were unable to recover it. 
00:26:53.021 [2024-07-13 06:21:59.473061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.473202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.473227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.021 qpair failed and we were unable to recover it. 00:26:53.021 [2024-07-13 06:21:59.473396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.473537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.473561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.021 qpair failed and we were unable to recover it. 00:26:53.021 [2024-07-13 06:21:59.473731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.473857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.473889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.021 qpair failed and we were unable to recover it. 00:26:53.021 [2024-07-13 06:21:59.474062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.474172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.474197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.021 qpair failed and we were unable to recover it. 00:26:53.021 [2024-07-13 06:21:59.474350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.474495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.474520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.021 qpair failed and we were unable to recover it. 00:26:53.021 [2024-07-13 06:21:59.474687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.474834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.474858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.021 qpair failed and we were unable to recover it. 00:26:53.021 [2024-07-13 06:21:59.475019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.475140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.475165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.021 qpair failed and we were unable to recover it. 
00:26:53.021 [2024-07-13 06:21:59.475290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.475407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.475432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.021 qpair failed and we were unable to recover it. 00:26:53.021 [2024-07-13 06:21:59.475621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.475778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.475805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.021 qpair failed and we were unable to recover it. 00:26:53.021 [2024-07-13 06:21:59.475989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.021 [2024-07-13 06:21:59.476113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.476137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.022 qpair failed and we were unable to recover it. 00:26:53.022 [2024-07-13 06:21:59.476250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.476366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.476390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.022 qpair failed and we were unable to recover it. 00:26:53.022 [2024-07-13 06:21:59.476574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.476706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.476730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.022 qpair failed and we were unable to recover it. 00:26:53.022 [2024-07-13 06:21:59.476930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.477061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.477087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.022 qpair failed and we were unable to recover it. 00:26:53.022 [2024-07-13 06:21:59.477255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.477395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.477420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.022 qpair failed and we were unable to recover it. 
00:26:53.022 [2024-07-13 06:21:59.477559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.477702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.477726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.022 qpair failed and we were unable to recover it. 00:26:53.022 [2024-07-13 06:21:59.477848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.477996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.478021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.022 qpair failed and we were unable to recover it. 00:26:53.022 [2024-07-13 06:21:59.478201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.478324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.478348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.022 qpair failed and we were unable to recover it. 00:26:53.022 [2024-07-13 06:21:59.478496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.478644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.478668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.022 qpair failed and we were unable to recover it. 00:26:53.022 [2024-07-13 06:21:59.478809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.478965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.478990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.022 qpair failed and we were unable to recover it. 00:26:53.022 [2024-07-13 06:21:59.479114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.479260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.479284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.022 qpair failed and we were unable to recover it. 00:26:53.022 [2024-07-13 06:21:59.479459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.479602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.479626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.022 qpair failed and we were unable to recover it. 
00:26:53.022 [2024-07-13 06:21:59.479767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.479941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.479967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.022 qpair failed and we were unable to recover it. 00:26:53.022 [2024-07-13 06:21:59.480101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.480298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.480326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.022 qpair failed and we were unable to recover it. 00:26:53.022 [2024-07-13 06:21:59.480488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.480649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.480675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.022 qpair failed and we were unable to recover it. 00:26:53.022 [2024-07-13 06:21:59.480846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.481017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.481061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.022 qpair failed and we were unable to recover it. 00:26:53.022 [2024-07-13 06:21:59.481206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.481372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.481396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.022 qpair failed and we were unable to recover it. 00:26:53.022 [2024-07-13 06:21:59.481578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.481725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.481749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.022 qpair failed and we were unable to recover it. 00:26:53.022 [2024-07-13 06:21:59.481874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.481998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.482022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.022 qpair failed and we were unable to recover it. 
00:26:53.022 [2024-07-13 06:21:59.482182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.482329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.482354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.022 qpair failed and we were unable to recover it. 00:26:53.022 [2024-07-13 06:21:59.482504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.482621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.482645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.022 qpair failed and we were unable to recover it. 00:26:53.022 [2024-07-13 06:21:59.482807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.482926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.482951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.022 qpair failed and we were unable to recover it. 00:26:53.022 [2024-07-13 06:21:59.483074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.483220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.483243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.022 qpair failed and we were unable to recover it. 00:26:53.022 [2024-07-13 06:21:59.483387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.483506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.483532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.022 qpair failed and we were unable to recover it. 00:26:53.022 [2024-07-13 06:21:59.483712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.483829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.483855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.022 qpair failed and we were unable to recover it. 00:26:53.022 [2024-07-13 06:21:59.484007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.484149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.484173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.022 qpair failed and we were unable to recover it. 
00:26:53.022 [2024-07-13 06:21:59.484357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.022 [2024-07-13 06:21:59.484542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.484568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.023 qpair failed and we were unable to recover it. 00:26:53.023 [2024-07-13 06:21:59.484761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.484982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.485010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.023 qpair failed and we were unable to recover it. 00:26:53.023 [2024-07-13 06:21:59.485171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.485318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.485342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.023 qpair failed and we were unable to recover it. 00:26:53.023 [2024-07-13 06:21:59.485490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.485619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.485644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.023 qpair failed and we were unable to recover it. 00:26:53.023 [2024-07-13 06:21:59.485787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.485905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.485931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.023 qpair failed and we were unable to recover it. 00:26:53.023 [2024-07-13 06:21:59.486057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.486193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.486218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.023 qpair failed and we were unable to recover it. 00:26:53.023 [2024-07-13 06:21:59.486358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.486514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.486537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.023 qpair failed and we were unable to recover it. 
00:26:53.023 [2024-07-13 06:21:59.486714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.486823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.486846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.023 qpair failed and we were unable to recover it. 00:26:53.023 [2024-07-13 06:21:59.486968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.487113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.487137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.023 qpair failed and we were unable to recover it. 00:26:53.023 [2024-07-13 06:21:59.487280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.487427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.487455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.023 qpair failed and we were unable to recover it. 00:26:53.023 [2024-07-13 06:21:59.487615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.487770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.487796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.023 qpair failed and we were unable to recover it. 00:26:53.023 [2024-07-13 06:21:59.487966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.488120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.488146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.023 qpair failed and we were unable to recover it. 00:26:53.023 [2024-07-13 06:21:59.488320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.488479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.488508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.023 qpair failed and we were unable to recover it. 00:26:53.023 [2024-07-13 06:21:59.488669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.488872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.488900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.023 qpair failed and we were unable to recover it. 
00:26:53.023 [2024-07-13 06:21:59.489057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.489198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.489230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.023 qpair failed and we were unable to recover it. 00:26:53.023 [2024-07-13 06:21:59.489372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.489538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.489565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.023 qpair failed and we were unable to recover it. 00:26:53.023 [2024-07-13 06:21:59.489706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.489821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.489850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.023 qpair failed and we were unable to recover it. 00:26:53.023 [2024-07-13 06:21:59.490017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.490149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.490175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.023 qpair failed and we were unable to recover it. 00:26:53.023 [2024-07-13 06:21:59.490325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.490445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.490469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.023 qpair failed and we were unable to recover it. 00:26:53.023 [2024-07-13 06:21:59.490685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.490821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.490849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.023 qpair failed and we were unable to recover it. 00:26:53.023 [2024-07-13 06:21:59.491007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.491182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.491220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.023 qpair failed and we were unable to recover it. 
00:26:53.023 [2024-07-13 06:21:59.491395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.023 [2024-07-13 06:21:59.491572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.491597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.297 qpair failed and we were unable to recover it. 00:26:53.297 [2024-07-13 06:21:59.491766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.491927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.491957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.297 qpair failed and we were unable to recover it. 00:26:53.297 [2024-07-13 06:21:59.492103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.492229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.492263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.297 qpair failed and we were unable to recover it. 00:26:53.297 [2024-07-13 06:21:59.492431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.492621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.492656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.297 qpair failed and we were unable to recover it. 00:26:53.297 [2024-07-13 06:21:59.492812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.492956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.492994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.297 qpair failed and we were unable to recover it. 00:26:53.297 [2024-07-13 06:21:59.493137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.493301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.493334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.297 qpair failed and we were unable to recover it. 00:26:53.297 [2024-07-13 06:21:59.493496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.493646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.493682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.297 qpair failed and we were unable to recover it. 
00:26:53.297 [2024-07-13 06:21:59.493837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.494023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.494056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.297 qpair failed and we were unable to recover it. 00:26:53.297 [2024-07-13 06:21:59.494196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.494352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.494378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.297 qpair failed and we were unable to recover it. 00:26:53.297 [2024-07-13 06:21:59.494499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.494626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.494651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.297 qpair failed and we were unable to recover it. 00:26:53.297 [2024-07-13 06:21:59.494776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.494923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.494948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.297 qpair failed and we were unable to recover it. 00:26:53.297 [2024-07-13 06:21:59.495093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.495214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.495238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.297 qpair failed and we were unable to recover it. 00:26:53.297 [2024-07-13 06:21:59.495395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.495516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.495556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.297 qpair failed and we were unable to recover it. 00:26:53.297 [2024-07-13 06:21:59.495714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.495863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.495897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.297 qpair failed and we were unable to recover it. 
00:26:53.297 [2024-07-13 06:21:59.496046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.496171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.496196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.297 qpair failed and we were unable to recover it. 00:26:53.297 [2024-07-13 06:21:59.496348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.496492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.496516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.297 qpair failed and we were unable to recover it. 00:26:53.297 [2024-07-13 06:21:59.496639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.496782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.496806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.297 qpair failed and we were unable to recover it. 00:26:53.297 [2024-07-13 06:21:59.496951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.497068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.297 [2024-07-13 06:21:59.497092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.297 qpair failed and we were unable to recover it. 00:26:53.297 [2024-07-13 06:21:59.497262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.497383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.497408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.298 qpair failed and we were unable to recover it. 00:26:53.298 [2024-07-13 06:21:59.497552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.497758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.497790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.298 qpair failed and we were unable to recover it. 00:26:53.298 [2024-07-13 06:21:59.497963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.498122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.498149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.298 qpair failed and we were unable to recover it. 
00:26:53.298 [2024-07-13 06:21:59.498304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.498418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.498442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.298 qpair failed and we were unable to recover it. 00:26:53.298 [2024-07-13 06:21:59.498586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.498737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.498761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.298 qpair failed and we were unable to recover it. 00:26:53.298 [2024-07-13 06:21:59.498910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.499088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.499112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.298 qpair failed and we were unable to recover it. 00:26:53.298 [2024-07-13 06:21:59.499256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.499405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.499429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.298 qpair failed and we were unable to recover it. 00:26:53.298 [2024-07-13 06:21:59.499576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.499720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.499745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.298 qpair failed and we were unable to recover it. 00:26:53.298 [2024-07-13 06:21:59.499917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.500041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.500064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.298 qpair failed and we were unable to recover it. 00:26:53.298 [2024-07-13 06:21:59.500179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.500347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.500370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.298 qpair failed and we were unable to recover it. 
00:26:53.298 [2024-07-13 06:21:59.500518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.500689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.500713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.298 qpair failed and we were unable to recover it. 00:26:53.298 [2024-07-13 06:21:59.500885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.501008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.501032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.298 qpair failed and we were unable to recover it. 00:26:53.298 [2024-07-13 06:21:59.501177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.501301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.501326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.298 qpair failed and we were unable to recover it. 00:26:53.298 [2024-07-13 06:21:59.501506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.501646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.501671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.298 qpair failed and we were unable to recover it. 00:26:53.298 [2024-07-13 06:21:59.501797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.501975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.502001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.298 qpair failed and we were unable to recover it. 00:26:53.298 [2024-07-13 06:21:59.502151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.502265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.502289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.298 qpair failed and we were unable to recover it. 00:26:53.298 [2024-07-13 06:21:59.502435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.502551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.502575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.298 qpair failed and we were unable to recover it. 
00:26:53.298 [2024-07-13 06:21:59.502693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.502890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.502915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.298 qpair failed and we were unable to recover it. 00:26:53.298 [2024-07-13 06:21:59.503073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.503245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.503269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.298 qpair failed and we were unable to recover it. 00:26:53.298 [2024-07-13 06:21:59.503441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.503578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.503602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.298 qpair failed and we were unable to recover it. 00:26:53.298 [2024-07-13 06:21:59.503760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.503925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.503951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.298 qpair failed and we were unable to recover it. 00:26:53.298 [2024-07-13 06:21:59.504082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.504199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.298 [2024-07-13 06:21:59.504224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.299 qpair failed and we were unable to recover it. 00:26:53.299 [2024-07-13 06:21:59.504347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.299 [2024-07-13 06:21:59.504523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.299 [2024-07-13 06:21:59.504548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.299 qpair failed and we were unable to recover it. 00:26:53.299 [2024-07-13 06:21:59.504657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.299 [2024-07-13 06:21:59.504770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.299 [2024-07-13 06:21:59.504794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.299 qpair failed and we were unable to recover it. 
00:26:53.299 [2024-07-13 06:21:59.504924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.299 [2024-07-13 06:21:59.505100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.299 [2024-07-13 06:21:59.505124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420
00:26:53.299 qpair failed and we were unable to recover it.
00:26:53.299 [... the same four-line failure sequence (two posix.c:1032:posix_sock_create connect() errors with errno = 111, followed by nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock reporting "sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420" and "qpair failed and we were unable to recover it.") repeats continuously with only the microsecond timestamps changing, from 2024-07-13 06:21:59.505 through approximately 06:21:59.559, console time 00:26:53.299 to 00:26:53.304 ...]
00:26:53.304 [2024-07-13 06:21:59.559242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.559362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.559386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.304 qpair failed and we were unable to recover it. 00:26:53.304 [2024-07-13 06:21:59.559528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.559690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.559717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.304 qpair failed and we were unable to recover it. 00:26:53.304 [2024-07-13 06:21:59.559903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.560067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.560091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.304 qpair failed and we were unable to recover it. 00:26:53.304 [2024-07-13 06:21:59.560208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.560372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.560399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.304 qpair failed and we were unable to recover it. 00:26:53.304 [2024-07-13 06:21:59.560565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.560745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.560784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.304 qpair failed and we were unable to recover it. 00:26:53.304 [2024-07-13 06:21:59.560924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.561108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.561135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.304 qpair failed and we were unable to recover it. 00:26:53.304 [2024-07-13 06:21:59.561268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.561400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.561426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.304 qpair failed and we were unable to recover it. 
00:26:53.304 [2024-07-13 06:21:59.561604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.561771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.561796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.304 qpair failed and we were unable to recover it. 00:26:53.304 [2024-07-13 06:21:59.561995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.562143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.562167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.304 qpair failed and we were unable to recover it. 00:26:53.304 [2024-07-13 06:21:59.562355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.562510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.562538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.304 qpair failed and we were unable to recover it. 00:26:53.304 [2024-07-13 06:21:59.562659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.562793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.562820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.304 qpair failed and we were unable to recover it. 00:26:53.304 [2024-07-13 06:21:59.563005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.563162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.563189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.304 qpair failed and we were unable to recover it. 00:26:53.304 [2024-07-13 06:21:59.563362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.563535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.563576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.304 qpair failed and we were unable to recover it. 00:26:53.304 [2024-07-13 06:21:59.563742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.563898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.563925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.304 qpair failed and we were unable to recover it. 
00:26:53.304 [2024-07-13 06:21:59.564074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.564232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.564259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.304 qpair failed and we were unable to recover it. 00:26:53.304 [2024-07-13 06:21:59.564396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.564552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.564580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.304 qpair failed and we were unable to recover it. 00:26:53.304 [2024-07-13 06:21:59.564741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.564887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.564912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.304 qpair failed and we were unable to recover it. 00:26:53.304 [2024-07-13 06:21:59.565063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.565225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.304 [2024-07-13 06:21:59.565252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.304 qpair failed and we were unable to recover it. 00:26:53.304 [2024-07-13 06:21:59.565378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.565515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.565541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.305 qpair failed and we were unable to recover it. 00:26:53.305 [2024-07-13 06:21:59.565716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.565883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.565909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.305 qpair failed and we were unable to recover it. 00:26:53.305 [2024-07-13 06:21:59.566066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.566212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.566235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.305 qpair failed and we were unable to recover it. 
00:26:53.305 [2024-07-13 06:21:59.566366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.566499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.566526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.305 qpair failed and we were unable to recover it. 00:26:53.305 [2024-07-13 06:21:59.566708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.566841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.566880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.305 qpair failed and we were unable to recover it. 00:26:53.305 [2024-07-13 06:21:59.567045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.567179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.567207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.305 qpair failed and we were unable to recover it. 00:26:53.305 [2024-07-13 06:21:59.567368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.567485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.567509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.305 qpair failed and we were unable to recover it. 00:26:53.305 [2024-07-13 06:21:59.567661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.567828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.567854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.305 qpair failed and we were unable to recover it. 00:26:53.305 [2024-07-13 06:21:59.568029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.568301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.568351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.305 qpair failed and we were unable to recover it. 00:26:53.305 [2024-07-13 06:21:59.568580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.568737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.568765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.305 qpair failed and we were unable to recover it. 
00:26:53.305 [2024-07-13 06:21:59.568928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.569054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.569078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.305 qpair failed and we were unable to recover it. 00:26:53.305 [2024-07-13 06:21:59.569228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.569408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.569435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.305 qpair failed and we were unable to recover it. 00:26:53.305 [2024-07-13 06:21:59.569577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.569773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.569798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.305 qpair failed and we were unable to recover it. 00:26:53.305 [2024-07-13 06:21:59.569950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.570157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.570182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.305 qpair failed and we were unable to recover it. 00:26:53.305 [2024-07-13 06:21:59.570349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.570462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.570504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.305 qpair failed and we were unable to recover it. 00:26:53.305 [2024-07-13 06:21:59.570638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.570792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.570818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.305 qpair failed and we were unable to recover it. 00:26:53.305 [2024-07-13 06:21:59.570966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.571087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.571111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.305 qpair failed and we were unable to recover it. 
00:26:53.305 [2024-07-13 06:21:59.571284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.571439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.571466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.305 qpair failed and we were unable to recover it. 00:26:53.305 [2024-07-13 06:21:59.571627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.571773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.571798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.305 qpair failed and we were unable to recover it. 00:26:53.305 [2024-07-13 06:21:59.571949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.572119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.572150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.305 qpair failed and we were unable to recover it. 00:26:53.305 [2024-07-13 06:21:59.572346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.572496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.572536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.305 qpair failed and we were unable to recover it. 00:26:53.305 [2024-07-13 06:21:59.572698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.572872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.572899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.305 qpair failed and we were unable to recover it. 00:26:53.305 [2024-07-13 06:21:59.573067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.573189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.573212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.305 qpair failed and we were unable to recover it. 00:26:53.305 [2024-07-13 06:21:59.573380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.573531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.573559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.305 qpair failed and we were unable to recover it. 
00:26:53.305 [2024-07-13 06:21:59.573719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.573879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.573907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.305 qpair failed and we were unable to recover it. 00:26:53.305 [2024-07-13 06:21:59.574067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.574239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.574263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.305 qpair failed and we were unable to recover it. 00:26:53.305 [2024-07-13 06:21:59.574466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.574661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.574689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.305 qpair failed and we were unable to recover it. 00:26:53.305 [2024-07-13 06:21:59.574847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.575045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.575072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.305 qpair failed and we were unable to recover it. 00:26:53.305 [2024-07-13 06:21:59.575231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.575412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.575439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.305 qpair failed and we were unable to recover it. 00:26:53.305 [2024-07-13 06:21:59.575570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.575753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.575780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.305 qpair failed and we were unable to recover it. 00:26:53.305 [2024-07-13 06:21:59.575918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.576060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.305 [2024-07-13 06:21:59.576084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.305 qpair failed and we were unable to recover it. 
00:26:53.305 [2024-07-13 06:21:59.576253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.576449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.576473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.306 qpair failed and we were unable to recover it. 00:26:53.306 [2024-07-13 06:21:59.576648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.576784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.576811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.306 qpair failed and we were unable to recover it. 00:26:53.306 [2024-07-13 06:21:59.577006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.577136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.577163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.306 qpair failed and we were unable to recover it. 00:26:53.306 [2024-07-13 06:21:59.577327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.577474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.577498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.306 qpair failed and we were unable to recover it. 00:26:53.306 [2024-07-13 06:21:59.577692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.577851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.577890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.306 qpair failed and we were unable to recover it. 00:26:53.306 [2024-07-13 06:21:59.578081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.578217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.578243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.306 qpair failed and we were unable to recover it. 00:26:53.306 [2024-07-13 06:21:59.578383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.578564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.578591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.306 qpair failed and we were unable to recover it. 
00:26:53.306 [2024-07-13 06:21:59.578758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.578907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.578931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.306 qpair failed and we were unable to recover it. 00:26:53.306 [2024-07-13 06:21:59.579077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.579235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.579262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.306 qpair failed and we were unable to recover it. 00:26:53.306 [2024-07-13 06:21:59.579388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.579512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.579538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.306 qpair failed and we were unable to recover it. 00:26:53.306 [2024-07-13 06:21:59.579738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.579906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.579930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.306 qpair failed and we were unable to recover it. 00:26:53.306 [2024-07-13 06:21:59.580084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.580235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.580260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.306 qpair failed and we were unable to recover it. 00:26:53.306 [2024-07-13 06:21:59.580408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.580558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.580586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.306 qpair failed and we were unable to recover it. 00:26:53.306 [2024-07-13 06:21:59.580773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.580915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.580940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.306 qpair failed and we were unable to recover it. 
00:26:53.306 [2024-07-13 06:21:59.581116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.581296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.581324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.306 qpair failed and we were unable to recover it. 00:26:53.306 [2024-07-13 06:21:59.581487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.581604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.581627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.306 qpair failed and we were unable to recover it. 00:26:53.306 [2024-07-13 06:21:59.581801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.581951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.581976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.306 qpair failed and we were unable to recover it. 00:26:53.306 [2024-07-13 06:21:59.582103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.582285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.582312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.306 qpair failed and we were unable to recover it. 00:26:53.306 [2024-07-13 06:21:59.582498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.582641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.582668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.306 qpair failed and we were unable to recover it. 00:26:53.306 [2024-07-13 06:21:59.582871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.582990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.583014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.306 qpair failed and we were unable to recover it. 00:26:53.306 [2024-07-13 06:21:59.583187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.583318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.583346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.306 qpair failed and we were unable to recover it. 
00:26:53.306 [2024-07-13 06:21:59.583476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.583639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.583666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.306 qpair failed and we were unable to recover it. 00:26:53.306 [2024-07-13 06:21:59.583804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.583948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.583975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.306 qpair failed and we were unable to recover it. 00:26:53.306 [2024-07-13 06:21:59.584143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.584289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.584313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.306 qpair failed and we were unable to recover it. 00:26:53.306 [2024-07-13 06:21:59.584461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.584570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.584594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.306 qpair failed and we were unable to recover it. 00:26:53.306 [2024-07-13 06:21:59.584735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.584883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.306 [2024-07-13 06:21:59.584912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.306 qpair failed and we were unable to recover it. 00:26:53.306 [2024-07-13 06:21:59.585113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.585279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.585303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.307 qpair failed and we were unable to recover it. 00:26:53.307 [2024-07-13 06:21:59.585454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.585574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.585597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.307 qpair failed and we were unable to recover it. 
00:26:53.307 [2024-07-13 06:21:59.585803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.585944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.585969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.307 qpair failed and we were unable to recover it. 00:26:53.307 [2024-07-13 06:21:59.586087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.586254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.586282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.307 qpair failed and we were unable to recover it. 00:26:53.307 [2024-07-13 06:21:59.586438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.586643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.586668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.307 qpair failed and we were unable to recover it. 00:26:53.307 [2024-07-13 06:21:59.586838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.586997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.587023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.307 qpair failed and we were unable to recover it. 00:26:53.307 [2024-07-13 06:21:59.587196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.587350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.587378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.307 qpair failed and we were unable to recover it. 00:26:53.307 [2024-07-13 06:21:59.587507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.587694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.587721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.307 qpair failed and we were unable to recover it. 00:26:53.307 [2024-07-13 06:21:59.587912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.588064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.588091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.307 qpair failed and we were unable to recover it. 
00:26:53.307 [2024-07-13 06:21:59.588260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.588380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.588404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.307 qpair failed and we were unable to recover it. 00:26:53.307 [2024-07-13 06:21:59.588551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.588681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.588708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.307 qpair failed and we were unable to recover it. 00:26:53.307 [2024-07-13 06:21:59.588872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.589028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.589055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.307 qpair failed and we were unable to recover it. 00:26:53.307 [2024-07-13 06:21:59.589217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.589366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.589394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.307 qpair failed and we were unable to recover it. 00:26:53.307 [2024-07-13 06:21:59.589533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.589708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.589736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.307 qpair failed and we were unable to recover it. 00:26:53.307 [2024-07-13 06:21:59.589919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.590073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.590097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.307 qpair failed and we were unable to recover it. 00:26:53.307 [2024-07-13 06:21:59.590255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.590402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.590429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.307 qpair failed and we were unable to recover it. 
00:26:53.307 [2024-07-13 06:21:59.590596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.590751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.590778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.307 qpair failed and we were unable to recover it. 00:26:53.307 [2024-07-13 06:21:59.590949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.591066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.591091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.307 qpair failed and we were unable to recover it. 00:26:53.307 [2024-07-13 06:21:59.591224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.591378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.591405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.307 qpair failed and we were unable to recover it. 00:26:53.307 [2024-07-13 06:21:59.591563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.591691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.591720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.307 qpair failed and we were unable to recover it. 00:26:53.307 [2024-07-13 06:21:59.591887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.592012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.592039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.307 qpair failed and we were unable to recover it. 00:26:53.307 [2024-07-13 06:21:59.592203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.592342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.592366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.307 qpair failed and we were unable to recover it. 00:26:53.307 [2024-07-13 06:21:59.592508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.592638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.307 [2024-07-13 06:21:59.592667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.307 qpair failed and we were unable to recover it. 
00:26:53.307 [2024-07-13 06:21:59.592833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.307 [2024-07-13 06:21:59.592972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.307 [2024-07-13 06:21:59.593005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420
00:26:53.307 qpair failed and we were unable to recover it.
[... the same sequence -- two posix.c:1032:posix_sock_create connect() failures (errno = 111), one nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x151d9f0 (addr=10.0.0.2, port=4420), then "qpair failed and we were unable to recover it." -- repeats continuously for timestamps 06:21:59.593168 through 06:21:59.644624 ...]
00:26:53.312 [2024-07-13 06:21:59.644791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.312 [2024-07-13 06:21:59.644958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.312 [2024-07-13 06:21:59.644986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420
00:26:53.312 qpair failed and we were unable to recover it.
00:26:53.312 [2024-07-13 06:21:59.645130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.312 [2024-07-13 06:21:59.645314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.645341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.313 qpair failed and we were unable to recover it. 00:26:53.313 [2024-07-13 06:21:59.645502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.645633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.645660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.313 qpair failed and we were unable to recover it. 00:26:53.313 [2024-07-13 06:21:59.645831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.646011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.646052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.313 qpair failed and we were unable to recover it. 00:26:53.313 [2024-07-13 06:21:59.646208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.646369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.646397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.313 qpair failed and we were unable to recover it. 00:26:53.313 [2024-07-13 06:21:59.646591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.646730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.646754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.313 qpair failed and we were unable to recover it. 00:26:53.313 [2024-07-13 06:21:59.646873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.647067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.647094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.313 qpair failed and we were unable to recover it. 00:26:53.313 [2024-07-13 06:21:59.647288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.647420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.647448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.313 qpair failed and we were unable to recover it. 
00:26:53.313 [2024-07-13 06:21:59.647601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.647728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.647755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.313 qpair failed and we were unable to recover it. 00:26:53.313 [2024-07-13 06:21:59.647895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.648036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.648060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.313 qpair failed and we were unable to recover it. 00:26:53.313 [2024-07-13 06:21:59.648227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.648382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.648414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.313 qpair failed and we were unable to recover it. 00:26:53.313 [2024-07-13 06:21:59.648602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.648728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.648752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.313 qpair failed and we were unable to recover it. 00:26:53.313 [2024-07-13 06:21:59.648920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.649053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.649081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.313 qpair failed and we were unable to recover it. 00:26:53.313 [2024-07-13 06:21:59.649229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.649371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.649396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.313 qpair failed and we were unable to recover it. 00:26:53.313 [2024-07-13 06:21:59.649583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.649707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.649735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.313 qpair failed and we were unable to recover it. 
00:26:53.313 [2024-07-13 06:21:59.649919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.650076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.650103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.313 qpair failed and we were unable to recover it. 00:26:53.313 [2024-07-13 06:21:59.650293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.650450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.650477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.313 qpair failed and we were unable to recover it. 00:26:53.313 [2024-07-13 06:21:59.650635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.650822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.650849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.313 qpair failed and we were unable to recover it. 00:26:53.313 [2024-07-13 06:21:59.651032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.651189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.651216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.313 qpair failed and we were unable to recover it. 00:26:53.313 [2024-07-13 06:21:59.651367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.651529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.651556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.313 qpair failed and we were unable to recover it. 00:26:53.313 [2024-07-13 06:21:59.651714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.651906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.651943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.313 qpair failed and we were unable to recover it. 00:26:53.313 [2024-07-13 06:21:59.652122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.652260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.652284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.313 qpair failed and we were unable to recover it. 
00:26:53.313 [2024-07-13 06:21:59.652464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.652598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.652625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.313 qpair failed and we were unable to recover it. 00:26:53.313 [2024-07-13 06:21:59.652813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.652976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.653004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.313 qpair failed and we were unable to recover it. 00:26:53.313 [2024-07-13 06:21:59.653161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.653327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.653352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.313 qpair failed and we were unable to recover it. 00:26:53.313 [2024-07-13 06:21:59.653463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.653607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.653632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.313 qpair failed and we were unable to recover it. 00:26:53.313 [2024-07-13 06:21:59.653776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.653906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.653934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.313 qpair failed and we were unable to recover it. 00:26:53.313 [2024-07-13 06:21:59.654091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.654253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.654280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.313 qpair failed and we were unable to recover it. 00:26:53.313 [2024-07-13 06:21:59.654418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.654604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.654631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.313 qpair failed and we were unable to recover it. 
00:26:53.313 [2024-07-13 06:21:59.654792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.654966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.655007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.313 qpair failed and we were unable to recover it. 00:26:53.313 [2024-07-13 06:21:59.655179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.655305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.655329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.313 qpair failed and we were unable to recover it. 00:26:53.313 [2024-07-13 06:21:59.655531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.655687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.655715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.313 qpair failed and we were unable to recover it. 00:26:53.313 [2024-07-13 06:21:59.655879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.313 [2024-07-13 06:21:59.656068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.656096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.314 qpair failed and we were unable to recover it. 00:26:53.314 [2024-07-13 06:21:59.656287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.656430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.656454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.314 qpair failed and we were unable to recover it. 00:26:53.314 [2024-07-13 06:21:59.656591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.656788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.656815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.314 qpair failed and we were unable to recover it. 00:26:53.314 [2024-07-13 06:21:59.656965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.657137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.657162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.314 qpair failed and we were unable to recover it. 
00:26:53.314 [2024-07-13 06:21:59.657339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.657516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.657541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.314 qpair failed and we were unable to recover it. 00:26:53.314 [2024-07-13 06:21:59.657657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.657803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.657828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.314 qpair failed and we were unable to recover it. 00:26:53.314 [2024-07-13 06:21:59.658013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.658161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.658185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.314 qpair failed and we were unable to recover it. 00:26:53.314 [2024-07-13 06:21:59.658296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.658463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.658489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.314 qpair failed and we were unable to recover it. 00:26:53.314 [2024-07-13 06:21:59.658676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.658837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.658873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.314 qpair failed and we were unable to recover it. 00:26:53.314 [2024-07-13 06:21:59.659017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.659193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.659218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.314 qpair failed and we were unable to recover it. 00:26:53.314 [2024-07-13 06:21:59.659385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.659539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.659565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.314 qpair failed and we were unable to recover it. 
00:26:53.314 [2024-07-13 06:21:59.659726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.659887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.659916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.314 qpair failed and we were unable to recover it. 00:26:53.314 [2024-07-13 06:21:59.660072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.660244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.660267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.314 qpair failed and we were unable to recover it. 00:26:53.314 [2024-07-13 06:21:59.660439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.660551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.660575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.314 qpair failed and we were unable to recover it. 00:26:53.314 [2024-07-13 06:21:59.660742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.660887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.660912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.314 qpair failed and we were unable to recover it. 00:26:53.314 [2024-07-13 06:21:59.661056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.661171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.661213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.314 qpair failed and we were unable to recover it. 00:26:53.314 [2024-07-13 06:21:59.661384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.661541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.661568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.314 qpair failed and we were unable to recover it. 00:26:53.314 [2024-07-13 06:21:59.661723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.661909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.661937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.314 qpair failed and we were unable to recover it. 
00:26:53.314 [2024-07-13 06:21:59.662110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.662250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.662277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.314 qpair failed and we were unable to recover it. 00:26:53.314 [2024-07-13 06:21:59.662443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.662609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.662636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.314 qpair failed and we were unable to recover it. 00:26:53.314 [2024-07-13 06:21:59.662802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.662990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.663018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.314 qpair failed and we were unable to recover it. 00:26:53.314 [2024-07-13 06:21:59.663186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.663354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.663377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.314 qpair failed and we were unable to recover it. 00:26:53.314 [2024-07-13 06:21:59.663499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.663623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.663647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.314 qpair failed and we were unable to recover it. 00:26:53.314 [2024-07-13 06:21:59.663816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.663958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.663983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.314 qpair failed and we were unable to recover it. 00:26:53.314 [2024-07-13 06:21:59.664133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.664385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.664444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.314 qpair failed and we were unable to recover it. 
00:26:53.314 [2024-07-13 06:21:59.664635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.664754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.664780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.314 qpair failed and we were unable to recover it. 00:26:53.314 [2024-07-13 06:21:59.664960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.665112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.665137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.314 qpair failed and we were unable to recover it. 00:26:53.314 [2024-07-13 06:21:59.665263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.665401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.665425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.314 qpair failed and we were unable to recover it. 00:26:53.314 [2024-07-13 06:21:59.665551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.665694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.665719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.314 qpair failed and we were unable to recover it. 00:26:53.314 [2024-07-13 06:21:59.665839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.665989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.666018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.314 qpair failed and we were unable to recover it. 00:26:53.314 [2024-07-13 06:21:59.666225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.666377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.666416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.314 qpair failed and we were unable to recover it. 00:26:53.314 [2024-07-13 06:21:59.666593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.314 [2024-07-13 06:21:59.666707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.666731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.315 qpair failed and we were unable to recover it. 
00:26:53.315 [2024-07-13 06:21:59.666905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.667051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.667075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.315 qpair failed and we were unable to recover it. 00:26:53.315 [2024-07-13 06:21:59.667225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.667338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.667361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.315 qpair failed and we were unable to recover it. 00:26:53.315 [2024-07-13 06:21:59.667479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.667596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.667620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.315 qpair failed and we were unable to recover it. 00:26:53.315 [2024-07-13 06:21:59.667768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.667888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.667913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.315 qpair failed and we were unable to recover it. 00:26:53.315 [2024-07-13 06:21:59.668069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.668186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.668210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.315 qpair failed and we were unable to recover it. 00:26:53.315 [2024-07-13 06:21:59.668382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.668523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.668548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.315 qpair failed and we were unable to recover it. 00:26:53.315 [2024-07-13 06:21:59.668670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.668840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.668864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.315 qpair failed and we were unable to recover it. 
00:26:53.315 [2024-07-13 06:21:59.668991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.669139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.669163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.315 qpair failed and we were unable to recover it. 00:26:53.315 [2024-07-13 06:21:59.669337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.669466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.669495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.315 qpair failed and we were unable to recover it. 00:26:53.315 [2024-07-13 06:21:59.669643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.669780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.669805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.315 qpair failed and we were unable to recover it. 00:26:53.315 [2024-07-13 06:21:59.669953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.670070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.670095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.315 qpair failed and we were unable to recover it. 00:26:53.315 [2024-07-13 06:21:59.670245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.670357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.670381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.315 qpair failed and we were unable to recover it. 00:26:53.315 [2024-07-13 06:21:59.670555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.670731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.670758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.315 qpair failed and we were unable to recover it. 00:26:53.315 [2024-07-13 06:21:59.670929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.671074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.671113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.315 qpair failed and we were unable to recover it. 
00:26:53.315 [2024-07-13 06:21:59.671267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.671452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.671478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.315 qpair failed and we were unable to recover it. 00:26:53.315 [2024-07-13 06:21:59.671635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.671793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.671820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.315 qpair failed and we were unable to recover it. 00:26:53.315 [2024-07-13 06:21:59.672007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.672157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.672181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.315 qpair failed and we were unable to recover it. 00:26:53.315 [2024-07-13 06:21:59.672353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.672532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.672556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.315 qpair failed and we were unable to recover it. 00:26:53.315 [2024-07-13 06:21:59.672707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.672823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.672847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.315 qpair failed and we were unable to recover it. 00:26:53.315 [2024-07-13 06:21:59.673000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.673231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.673276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.315 qpair failed and we were unable to recover it. 00:26:53.315 [2024-07-13 06:21:59.673501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.673708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.673735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.315 qpair failed and we were unable to recover it. 
00:26:53.315 [2024-07-13 06:21:59.673872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.674018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.674041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.315 qpair failed and we were unable to recover it. 00:26:53.315 [2024-07-13 06:21:59.674186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.674373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.674400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.315 qpair failed and we were unable to recover it. 00:26:53.315 [2024-07-13 06:21:59.674567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.674742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.674769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.315 qpair failed and we were unable to recover it. 00:26:53.315 [2024-07-13 06:21:59.674959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.675158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.675182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.315 qpair failed and we were unable to recover it. 00:26:53.315 [2024-07-13 06:21:59.675324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.315 [2024-07-13 06:21:59.675466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.316 [2024-07-13 06:21:59.675490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.316 qpair failed and we were unable to recover it. 00:26:53.316 [2024-07-13 06:21:59.675664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.316 [2024-07-13 06:21:59.675847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.316 [2024-07-13 06:21:59.675881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.316 qpair failed and we were unable to recover it. 00:26:53.316 [2024-07-13 06:21:59.676018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.316 [2024-07-13 06:21:59.676189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.316 [2024-07-13 06:21:59.676213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.316 qpair failed and we were unable to recover it. 
00:26:53.316 [2024-07-13 06:21:59.676360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.316 [2024-07-13 06:21:59.676480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.316 [2024-07-13 06:21:59.676505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.316 qpair failed and we were unable to recover it. 00:26:53.316 [2024-07-13 06:21:59.676628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.316 [2024-07-13 06:21:59.676772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.316 [2024-07-13 06:21:59.676797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.316 qpair failed and we were unable to recover it. 00:26:53.316 [2024-07-13 06:21:59.676944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.316 [2024-07-13 06:21:59.677086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.316 [2024-07-13 06:21:59.677110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.316 qpair failed and we were unable to recover it. 00:26:53.316 [2024-07-13 06:21:59.677256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.316 [2024-07-13 06:21:59.677451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.316 [2024-07-13 06:21:59.677477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.316 qpair failed and we were unable to recover it. 00:26:53.316 [2024-07-13 06:21:59.677644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.316 [2024-07-13 06:21:59.677785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.316 [2024-07-13 06:21:59.677810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.316 qpair failed and we were unable to recover it. 00:26:53.316 [2024-07-13 06:21:59.677983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.316 [2024-07-13 06:21:59.678127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.316 [2024-07-13 06:21:59.678150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.316 qpair failed and we were unable to recover it. 00:26:53.316 [2024-07-13 06:21:59.678300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.316 [2024-07-13 06:21:59.678436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.316 [2024-07-13 06:21:59.678460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.316 qpair failed and we were unable to recover it. 
00:26:53.316 [2024-07-13 06:21:59.678579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.316 [2024-07-13 06:21:59.678720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.316 [2024-07-13 06:21:59.678744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420
00:26:53.316 qpair failed and we were unable to recover it.
00:26:53.316 [... the same three-message failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 06:21:59.678 through 06:21:59.728 ...]
00:26:53.321 [2024-07-13 06:21:59.728317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.321 [2024-07-13 06:21:59.728456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.321 [2024-07-13 06:21:59.728481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420
00:26:53.321 qpair failed and we were unable to recover it.
00:26:53.321 [2024-07-13 06:21:59.728628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.321 [2024-07-13 06:21:59.728800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.321 [2024-07-13 06:21:59.728824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.321 qpair failed and we were unable to recover it. 00:26:53.321 [2024-07-13 06:21:59.729010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.321 [2024-07-13 06:21:59.729122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.321 [2024-07-13 06:21:59.729147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.321 qpair failed and we were unable to recover it. 00:26:53.321 [2024-07-13 06:21:59.729288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.321 [2024-07-13 06:21:59.729433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.321 [2024-07-13 06:21:59.729458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.321 qpair failed and we were unable to recover it. 00:26:53.321 [2024-07-13 06:21:59.729631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.321 [2024-07-13 06:21:59.729773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.321 [2024-07-13 06:21:59.729797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.321 qpair failed and we were unable to recover it. 00:26:53.321 [2024-07-13 06:21:59.729962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.321 [2024-07-13 06:21:59.730101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.321 [2024-07-13 06:21:59.730126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.321 qpair failed and we were unable to recover it. 00:26:53.321 [2024-07-13 06:21:59.730265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.321 [2024-07-13 06:21:59.730413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.321 [2024-07-13 06:21:59.730437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.321 qpair failed and we were unable to recover it. 00:26:53.321 [2024-07-13 06:21:59.730583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.321 [2024-07-13 06:21:59.730750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.321 [2024-07-13 06:21:59.730775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.321 qpair failed and we were unable to recover it. 
00:26:53.321 [2024-07-13 06:21:59.730925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.321 [2024-07-13 06:21:59.731067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.321 [2024-07-13 06:21:59.731091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.321 qpair failed and we were unable to recover it. 00:26:53.321 [2024-07-13 06:21:59.731235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.321 [2024-07-13 06:21:59.731378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.321 [2024-07-13 06:21:59.731402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.321 qpair failed and we were unable to recover it. 00:26:53.321 [2024-07-13 06:21:59.731545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.321 [2024-07-13 06:21:59.731691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.321 [2024-07-13 06:21:59.731715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.321 qpair failed and we were unable to recover it. 00:26:53.321 [2024-07-13 06:21:59.731886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.321 [2024-07-13 06:21:59.732002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.321 [2024-07-13 06:21:59.732027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.321 qpair failed and we were unable to recover it. 00:26:53.321 [2024-07-13 06:21:59.732167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.321 [2024-07-13 06:21:59.732289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.321 [2024-07-13 06:21:59.732314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.321 qpair failed and we were unable to recover it. 00:26:53.321 [2024-07-13 06:21:59.732461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.321 [2024-07-13 06:21:59.732630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.321 [2024-07-13 06:21:59.732655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.321 qpair failed and we were unable to recover it. 00:26:53.321 [2024-07-13 06:21:59.732818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.321 [2024-07-13 06:21:59.732980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.321 [2024-07-13 06:21:59.733004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.321 qpair failed and we were unable to recover it. 
00:26:53.321 [2024-07-13 06:21:59.733115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.321 [2024-07-13 06:21:59.733251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.321 [2024-07-13 06:21:59.733275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.321 qpair failed and we were unable to recover it. 00:26:53.321 [2024-07-13 06:21:59.733445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.321 [2024-07-13 06:21:59.733609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.321 [2024-07-13 06:21:59.733636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.321 qpair failed and we were unable to recover it. 00:26:53.321 [2024-07-13 06:21:59.733826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.321 [2024-07-13 06:21:59.734016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.321 [2024-07-13 06:21:59.734044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.321 qpair failed and we were unable to recover it. 00:26:53.321 [2024-07-13 06:21:59.734225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.734387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.734414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.322 qpair failed and we were unable to recover it. 00:26:53.322 [2024-07-13 06:21:59.734631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.734810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.734837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.322 qpair failed and we were unable to recover it. 00:26:53.322 [2024-07-13 06:21:59.735036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.735213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.735283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.322 qpair failed and we were unable to recover it. 00:26:53.322 [2024-07-13 06:21:59.735483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.735695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.735758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.322 qpair failed and we were unable to recover it. 
00:26:53.322 [2024-07-13 06:21:59.735952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.736101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.736126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.322 qpair failed and we were unable to recover it. 00:26:53.322 [2024-07-13 06:21:59.736300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.736419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.736448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.322 qpair failed and we were unable to recover it. 00:26:53.322 [2024-07-13 06:21:59.736620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.736764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.736788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.322 qpair failed and we were unable to recover it. 00:26:53.322 [2024-07-13 06:21:59.736940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.737110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.737135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.322 qpair failed and we were unable to recover it. 00:26:53.322 [2024-07-13 06:21:59.737285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.737430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.737455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.322 qpair failed and we were unable to recover it. 00:26:53.322 [2024-07-13 06:21:59.737600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.737751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.737775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.322 qpair failed and we were unable to recover it. 00:26:53.322 [2024-07-13 06:21:59.737897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.738043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.738067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.322 qpair failed and we were unable to recover it. 
00:26:53.322 [2024-07-13 06:21:59.738216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.738339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.738364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.322 qpair failed and we were unable to recover it. 00:26:53.322 [2024-07-13 06:21:59.738508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.738665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.738689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.322 qpair failed and we were unable to recover it. 00:26:53.322 [2024-07-13 06:21:59.738833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.738994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.739020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.322 qpair failed and we were unable to recover it. 00:26:53.322 [2024-07-13 06:21:59.739162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.739310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.739335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.322 qpair failed and we were unable to recover it. 00:26:53.322 [2024-07-13 06:21:59.739512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.739683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.739707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.322 qpair failed and we were unable to recover it. 00:26:53.322 [2024-07-13 06:21:59.739881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.740026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.740051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.322 qpair failed and we were unable to recover it. 00:26:53.322 [2024-07-13 06:21:59.740201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.740370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.740395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.322 qpair failed and we were unable to recover it. 
00:26:53.322 [2024-07-13 06:21:59.740513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.740682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.740707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.322 qpair failed and we were unable to recover it. 00:26:53.322 [2024-07-13 06:21:59.740848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.741028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.741053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.322 qpair failed and we were unable to recover it. 00:26:53.322 [2024-07-13 06:21:59.741167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.741281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.741305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.322 qpair failed and we were unable to recover it. 00:26:53.322 [2024-07-13 06:21:59.741459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.741606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.741630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.322 qpair failed and we were unable to recover it. 00:26:53.322 [2024-07-13 06:21:59.741815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.741987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.742012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.322 qpair failed and we were unable to recover it. 00:26:53.322 [2024-07-13 06:21:59.742185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.742293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.742317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.322 qpair failed and we were unable to recover it. 00:26:53.322 [2024-07-13 06:21:59.742464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.742581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.742608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.322 qpair failed and we were unable to recover it. 
00:26:53.322 [2024-07-13 06:21:59.742728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.742903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.742928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.322 qpair failed and we were unable to recover it. 00:26:53.322 [2024-07-13 06:21:59.743080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.743221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.743246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.322 qpair failed and we were unable to recover it. 00:26:53.322 [2024-07-13 06:21:59.743385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.743530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.743555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.322 qpair failed and we were unable to recover it. 00:26:53.322 [2024-07-13 06:21:59.743722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.743838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.743872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.322 qpair failed and we were unable to recover it. 00:26:53.322 [2024-07-13 06:21:59.744018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.744186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.744211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.322 qpair failed and we were unable to recover it. 00:26:53.322 [2024-07-13 06:21:59.744331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.744456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.322 [2024-07-13 06:21:59.744482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.322 qpair failed and we were unable to recover it. 00:26:53.323 [2024-07-13 06:21:59.744625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.744798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.744822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.323 qpair failed and we were unable to recover it. 
00:26:53.323 [2024-07-13 06:21:59.744969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.745118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.745143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.323 qpair failed and we were unable to recover it. 00:26:53.323 [2024-07-13 06:21:59.745283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.745430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.745455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.323 qpair failed and we were unable to recover it. 00:26:53.323 [2024-07-13 06:21:59.745630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.745745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.745769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.323 qpair failed and we were unable to recover it. 00:26:53.323 [2024-07-13 06:21:59.745886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.746002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.746027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.323 qpair failed and we were unable to recover it. 00:26:53.323 [2024-07-13 06:21:59.746141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.746280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.746308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.323 qpair failed and we were unable to recover it. 00:26:53.323 [2024-07-13 06:21:59.746471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.746686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.746710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.323 qpair failed and we were unable to recover it. 00:26:53.323 [2024-07-13 06:21:59.746828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.746976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.747002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.323 qpair failed and we were unable to recover it. 
00:26:53.323 [2024-07-13 06:21:59.747153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.747328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.747352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.323 qpair failed and we were unable to recover it. 00:26:53.323 [2024-07-13 06:21:59.747496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.747670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.747695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.323 qpair failed and we were unable to recover it. 00:26:53.323 [2024-07-13 06:21:59.747873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.747990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.748014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.323 qpair failed and we were unable to recover it. 00:26:53.323 [2024-07-13 06:21:59.748156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.748278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.748302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.323 qpair failed and we were unable to recover it. 00:26:53.323 [2024-07-13 06:21:59.748479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.748650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.748674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.323 qpair failed and we were unable to recover it. 00:26:53.323 [2024-07-13 06:21:59.748799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.748939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.748965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.323 qpair failed and we were unable to recover it. 00:26:53.323 [2024-07-13 06:21:59.749103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.749219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.749243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.323 qpair failed and we were unable to recover it. 
00:26:53.323 [2024-07-13 06:21:59.749363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.749541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.749566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.323 qpair failed and we were unable to recover it. 00:26:53.323 [2024-07-13 06:21:59.749690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.749835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.749860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.323 qpair failed and we were unable to recover it. 00:26:53.323 [2024-07-13 06:21:59.750014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.750135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.750159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.323 qpair failed and we were unable to recover it. 00:26:53.323 [2024-07-13 06:21:59.750302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.750443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.750467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.323 qpair failed and we were unable to recover it. 00:26:53.323 [2024-07-13 06:21:59.750633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.750812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.750840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.323 qpair failed and we were unable to recover it. 00:26:53.323 [2024-07-13 06:21:59.751062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.751364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.751416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.323 qpair failed and we were unable to recover it. 00:26:53.323 [2024-07-13 06:21:59.751579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.751801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.751828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.323 qpair failed and we were unable to recover it. 
00:26:53.323 [2024-07-13 06:21:59.752041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.752219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.752246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.323 qpair failed and we were unable to recover it. 00:26:53.323 [2024-07-13 06:21:59.752399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.752531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.323 [2024-07-13 06:21:59.752557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.324 qpair failed and we were unable to recover it. 00:26:53.324 [2024-07-13 06:21:59.752702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.324 [2024-07-13 06:21:59.752823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.324 [2024-07-13 06:21:59.752848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.324 qpair failed and we were unable to recover it. 00:26:53.324 [2024-07-13 06:21:59.752966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.324 [2024-07-13 06:21:59.753089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.324 [2024-07-13 06:21:59.753118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.324 qpair failed and we were unable to recover it. 00:26:53.324 [2024-07-13 06:21:59.753260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.324 [2024-07-13 06:21:59.753400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.324 [2024-07-13 06:21:59.753425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.324 qpair failed and we were unable to recover it. 00:26:53.324 [2024-07-13 06:21:59.753565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.324 [2024-07-13 06:21:59.753742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.324 [2024-07-13 06:21:59.753767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.324 qpair failed and we were unable to recover it. 00:26:53.324 [2024-07-13 06:21:59.753925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.324 [2024-07-13 06:21:59.754037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.324 [2024-07-13 06:21:59.754063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.324 qpair failed and we were unable to recover it. 
00:26:53.324 [2024-07-13 06:21:59.754185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.324 [2024-07-13 06:21:59.754325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.324 [2024-07-13 06:21:59.754349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.324 qpair failed and we were unable to recover it. 00:26:53.324 [2024-07-13 06:21:59.754520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.324 [2024-07-13 06:21:59.754685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.324 [2024-07-13 06:21:59.754709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.324 qpair failed and we were unable to recover it. 00:26:53.324 [2024-07-13 06:21:59.754829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.324 [2024-07-13 06:21:59.755024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.324 [2024-07-13 06:21:59.755049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.324 qpair failed and we were unable to recover it. 00:26:53.324 [2024-07-13 06:21:59.755202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.324 [2024-07-13 06:21:59.755340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.324 [2024-07-13 06:21:59.755364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.324 qpair failed and we were unable to recover it. 00:26:53.324 [2024-07-13 06:21:59.755508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.324 [2024-07-13 06:21:59.755654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.324 [2024-07-13 06:21:59.755679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.324 qpair failed and we were unable to recover it. 00:26:53.324 [2024-07-13 06:21:59.755799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.324 [2024-07-13 06:21:59.755916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.324 [2024-07-13 06:21:59.755950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.324 qpair failed and we were unable to recover it. 00:26:53.324 [2024-07-13 06:21:59.756077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.324 [2024-07-13 06:21:59.756221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.324 [2024-07-13 06:21:59.756248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.324 qpair failed and we were unable to recover it. 
00:26:53.324 [2024-07-13 06:21:59.756431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.324 [2024-07-13 06:21:59.756600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.324 [2024-07-13 06:21:59.756628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.324 qpair failed and we were unable to recover it. 00:26:53.324 [2024-07-13 06:21:59.756761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.324 [2024-07-13 06:21:59.756876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.324 [2024-07-13 06:21:59.756901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.324 qpair failed and we were unable to recover it. 00:26:53.324 [2024-07-13 06:21:59.757021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.324 [2024-07-13 06:21:59.757172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.324 [2024-07-13 06:21:59.757197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.324 qpair failed and we were unable to recover it. 00:26:53.324 [2024-07-13 06:21:59.757339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.324 [2024-07-13 06:21:59.757477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.324 [2024-07-13 06:21:59.757502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.324 qpair failed and we were unable to recover it. 00:26:53.324 [2024-07-13 06:21:59.757653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.757776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.757802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.325 qpair failed and we were unable to recover it. 00:26:53.325 [2024-07-13 06:21:59.757923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.758094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.758119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.325 qpair failed and we were unable to recover it. 00:26:53.325 [2024-07-13 06:21:59.758263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.758378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.758402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.325 qpair failed and we were unable to recover it. 
00:26:53.325 [2024-07-13 06:21:59.758545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.758691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.758716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.325 qpair failed and we were unable to recover it. 00:26:53.325 [2024-07-13 06:21:59.758871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.758984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.759009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.325 qpair failed and we were unable to recover it. 00:26:53.325 [2024-07-13 06:21:59.759157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.759294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.759318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.325 qpair failed and we were unable to recover it. 00:26:53.325 [2024-07-13 06:21:59.759465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.759649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.759676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.325 qpair failed and we were unable to recover it. 00:26:53.325 [2024-07-13 06:21:59.759835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.759983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.760008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.325 qpair failed and we were unable to recover it. 00:26:53.325 [2024-07-13 06:21:59.760145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.760310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.760334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.325 qpair failed and we were unable to recover it. 00:26:53.325 [2024-07-13 06:21:59.760451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.760569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.760593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.325 qpair failed and we were unable to recover it. 
00:26:53.325 [2024-07-13 06:21:59.760704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.760844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.760874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.325 qpair failed and we were unable to recover it. 00:26:53.325 [2024-07-13 06:21:59.761018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.761130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.761155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.325 qpair failed and we were unable to recover it. 00:26:53.325 [2024-07-13 06:21:59.761300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.761469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.761493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.325 qpair failed and we were unable to recover it. 00:26:53.325 [2024-07-13 06:21:59.761609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.761750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.761774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.325 qpair failed and we were unable to recover it. 00:26:53.325 [2024-07-13 06:21:59.761921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.762090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.762114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.325 qpair failed and we were unable to recover it. 00:26:53.325 [2024-07-13 06:21:59.762259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.762375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.762400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.325 qpair failed and we were unable to recover it. 00:26:53.325 [2024-07-13 06:21:59.762583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.762727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.762752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.325 qpair failed and we were unable to recover it. 
00:26:53.325 [2024-07-13 06:21:59.762925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.763042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.763067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.325 qpair failed and we were unable to recover it. 00:26:53.325 [2024-07-13 06:21:59.763220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.763356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.763381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.325 qpair failed and we were unable to recover it. 00:26:53.325 [2024-07-13 06:21:59.763527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.763640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.763664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.325 qpair failed and we were unable to recover it. 00:26:53.325 [2024-07-13 06:21:59.763816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.763989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.764014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.325 qpair failed and we were unable to recover it. 00:26:53.325 [2024-07-13 06:21:59.764188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.764359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.764384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.325 qpair failed and we were unable to recover it. 00:26:53.325 [2024-07-13 06:21:59.764528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.764665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.764689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.325 qpair failed and we were unable to recover it. 00:26:53.325 [2024-07-13 06:21:59.764826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.764971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.764996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.325 qpair failed and we were unable to recover it. 
00:26:53.325 [2024-07-13 06:21:59.765140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.765317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.765376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.325 qpair failed and we were unable to recover it. 00:26:53.325 [2024-07-13 06:21:59.765579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.765730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.765757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.325 qpair failed and we were unable to recover it. 00:26:53.325 [2024-07-13 06:21:59.765886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.766017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.766042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.325 qpair failed and we were unable to recover it. 00:26:53.325 [2024-07-13 06:21:59.766185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.766331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.766356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.325 qpair failed and we were unable to recover it. 00:26:53.325 [2024-07-13 06:21:59.766534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.766674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.766699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.325 qpair failed and we were unable to recover it. 00:26:53.325 [2024-07-13 06:21:59.766877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.767046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.767070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.325 qpair failed and we were unable to recover it. 00:26:53.325 [2024-07-13 06:21:59.767188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.767308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.325 [2024-07-13 06:21:59.767333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.325 qpair failed and we were unable to recover it. 
00:26:53.326 [2024-07-13 06:21:59.767531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.767699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.767727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.326 qpair failed and we were unable to recover it. 00:26:53.326 [2024-07-13 06:21:59.767922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.768039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.768063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.326 qpair failed and we were unable to recover it. 00:26:53.326 [2024-07-13 06:21:59.768203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.768351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.768375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.326 qpair failed and we were unable to recover it. 00:26:53.326 [2024-07-13 06:21:59.768543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.768691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.768715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.326 qpair failed and we were unable to recover it. 00:26:53.326 [2024-07-13 06:21:59.768849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.768968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.768993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.326 qpair failed and we were unable to recover it. 00:26:53.326 [2024-07-13 06:21:59.769148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.769286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.769314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.326 qpair failed and we were unable to recover it. 00:26:53.326 [2024-07-13 06:21:59.769487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.769640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.769664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.326 qpair failed and we were unable to recover it. 
00:26:53.326 [2024-07-13 06:21:59.769809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.769947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.769973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.326 qpair failed and we were unable to recover it. 00:26:53.326 [2024-07-13 06:21:59.770140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.770308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.770333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.326 qpair failed and we were unable to recover it. 00:26:53.326 [2024-07-13 06:21:59.770480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.770621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.770645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.326 qpair failed and we were unable to recover it. 00:26:53.326 [2024-07-13 06:21:59.770819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.770941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.770966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.326 qpair failed and we were unable to recover it. 00:26:53.326 [2024-07-13 06:21:59.771116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.771228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.771252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.326 qpair failed and we were unable to recover it. 00:26:53.326 [2024-07-13 06:21:59.771400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.771542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.771567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.326 qpair failed and we were unable to recover it. 00:26:53.326 [2024-07-13 06:21:59.771718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.771855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.771888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.326 qpair failed and we were unable to recover it. 
00:26:53.326 [2024-07-13 06:21:59.772033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.772179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.772203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.326 qpair failed and we were unable to recover it. 00:26:53.326 [2024-07-13 06:21:59.772365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.772548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.772580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.326 qpair failed and we were unable to recover it. 00:26:53.326 [2024-07-13 06:21:59.772736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.772922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.772947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.326 qpair failed and we were unable to recover it. 00:26:53.326 [2024-07-13 06:21:59.773087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.773260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.773285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.326 qpair failed and we were unable to recover it. 00:26:53.326 [2024-07-13 06:21:59.773432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.773596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.773623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.326 qpair failed and we were unable to recover it. 00:26:53.326 [2024-07-13 06:21:59.773842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.774024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.774049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.326 qpair failed and we were unable to recover it. 00:26:53.326 [2024-07-13 06:21:59.774194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.774309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.774334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.326 qpair failed and we were unable to recover it. 
00:26:53.326 [2024-07-13 06:21:59.774487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.774631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.774656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.326 qpair failed and we were unable to recover it. 00:26:53.326 [2024-07-13 06:21:59.774794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.774940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.774965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.326 qpair failed and we were unable to recover it. 00:26:53.326 [2024-07-13 06:21:59.775074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.775216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.775240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.326 qpair failed and we were unable to recover it. 00:26:53.326 [2024-07-13 06:21:59.775362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.775502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.775527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.326 qpair failed and we were unable to recover it. 00:26:53.326 [2024-07-13 06:21:59.775641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.775770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.775798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.326 qpair failed and we were unable to recover it. 00:26:53.326 [2024-07-13 06:21:59.776013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.776184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.776211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.326 qpair failed and we were unable to recover it. 00:26:53.326 [2024-07-13 06:21:59.776396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.776578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.776605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.326 qpair failed and we were unable to recover it. 
00:26:53.326 [2024-07-13 06:21:59.776809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.776948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.776973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.326 qpair failed and we were unable to recover it. 00:26:53.326 [2024-07-13 06:21:59.777118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.777287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.777311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.326 qpair failed and we were unable to recover it. 00:26:53.326 [2024-07-13 06:21:59.777419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.777533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.326 [2024-07-13 06:21:59.777557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.326 qpair failed and we were unable to recover it. 00:26:53.327 [2024-07-13 06:21:59.777709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.801766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.801800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.588 qpair failed and we were unable to recover it. 00:26:53.588 [2024-07-13 06:21:59.802028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.802184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.802208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.588 qpair failed and we were unable to recover it. 00:26:53.588 [2024-07-13 06:21:59.802358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.802502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.802526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.588 qpair failed and we were unable to recover it. 00:26:53.588 [2024-07-13 06:21:59.818778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.818992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.819019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.588 qpair failed and we were unable to recover it. 
00:26:53.588 [2024-07-13 06:21:59.819209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.819357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.819381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.588 qpair failed and we were unable to recover it. 00:26:53.588 [2024-07-13 06:21:59.819628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.819793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.819816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.588 qpair failed and we were unable to recover it. 00:26:53.588 [2024-07-13 06:21:59.820731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.820949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.820976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.588 qpair failed and we were unable to recover it. 00:26:53.588 [2024-07-13 06:21:59.821148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.821303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.821326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.588 qpair failed and we were unable to recover it. 00:26:53.588 [2024-07-13 06:21:59.821493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.821638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.821663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.588 qpair failed and we were unable to recover it. 00:26:53.588 [2024-07-13 06:21:59.821863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.822031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.822056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.588 qpair failed and we were unable to recover it. 00:26:53.588 [2024-07-13 06:21:59.822274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.822462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.822486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.588 qpair failed and we were unable to recover it. 
00:26:53.588 [2024-07-13 06:21:59.822636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.822886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.822911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.588 qpair failed and we were unable to recover it. 00:26:53.588 [2024-07-13 06:21:59.823164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.823310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.823332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.588 qpair failed and we were unable to recover it. 00:26:53.588 [2024-07-13 06:21:59.823540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.823775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.823799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.588 qpair failed and we were unable to recover it. 00:26:53.588 [2024-07-13 06:21:59.823981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.824229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.824272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.588 qpair failed and we were unable to recover it. 00:26:53.588 [2024-07-13 06:21:59.824547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.824716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.824739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.588 qpair failed and we were unable to recover it. 00:26:53.588 [2024-07-13 06:21:59.824858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.824997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.825023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.588 qpair failed and we were unable to recover it. 00:26:53.588 [2024-07-13 06:21:59.825170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.825312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.825336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.588 qpair failed and we were unable to recover it. 
00:26:53.588 [2024-07-13 06:21:59.825485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.825661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.825685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.588 qpair failed and we were unable to recover it. 00:26:53.588 [2024-07-13 06:21:59.825839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.825980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.826005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.588 qpair failed and we were unable to recover it. 00:26:53.588 [2024-07-13 06:21:59.826194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.826371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.826394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.588 qpair failed and we were unable to recover it. 00:26:53.588 [2024-07-13 06:21:59.826571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.826721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.826745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.588 qpair failed and we were unable to recover it. 00:26:53.588 [2024-07-13 06:21:59.826971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.827109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.827133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.588 qpair failed and we were unable to recover it. 00:26:53.588 [2024-07-13 06:21:59.827424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.588 [2024-07-13 06:21:59.827596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.827619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.589 qpair failed and we were unable to recover it. 00:26:53.589 [2024-07-13 06:21:59.827775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.827922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.827947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.589 qpair failed and we were unable to recover it. 
00:26:53.589 [2024-07-13 06:21:59.828131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.828317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.828341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.589 qpair failed and we were unable to recover it. 00:26:53.589 [2024-07-13 06:21:59.828464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.828587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.828611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.589 qpair failed and we were unable to recover it. 00:26:53.589 [2024-07-13 06:21:59.828784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.828972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.828996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.589 qpair failed and we were unable to recover it. 00:26:53.589 [2024-07-13 06:21:59.829117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.829266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.829289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.589 qpair failed and we were unable to recover it. 00:26:53.589 [2024-07-13 06:21:59.829465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.829610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.829634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.589 qpair failed and we were unable to recover it. 00:26:53.589 [2024-07-13 06:21:59.829855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.830084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.830107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.589 qpair failed and we were unable to recover it. 00:26:53.589 [2024-07-13 06:21:59.830413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.830693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.830717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.589 qpair failed and we were unable to recover it. 
00:26:53.589 [2024-07-13 06:21:59.830879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.831055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.831079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.589 qpair failed and we were unable to recover it. 00:26:53.589 [2024-07-13 06:21:59.831261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.831402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.831425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.589 qpair failed and we were unable to recover it. 00:26:53.589 [2024-07-13 06:21:59.831593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.831748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.831774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.589 qpair failed and we were unable to recover it. 00:26:53.589 [2024-07-13 06:21:59.831958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.832107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.832134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.589 qpair failed and we were unable to recover it. 00:26:53.589 [2024-07-13 06:21:59.832284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.832406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.832429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.589 qpair failed and we were unable to recover it. 00:26:53.589 [2024-07-13 06:21:59.832583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.832760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.832784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.589 qpair failed and we were unable to recover it. 00:26:53.589 [2024-07-13 06:21:59.832963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.833089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.833114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.589 qpair failed and we were unable to recover it. 
00:26:53.589 [2024-07-13 06:21:59.833264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.833409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.833432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.589 qpair failed and we were unable to recover it. 00:26:53.589 [2024-07-13 06:21:59.833683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.833826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.833848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.589 qpair failed and we were unable to recover it. 00:26:53.589 [2024-07-13 06:21:59.834010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.834163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.834186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.589 qpair failed and we were unable to recover it. 00:26:53.589 [2024-07-13 06:21:59.834334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.834458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.834481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.589 qpair failed and we were unable to recover it. 00:26:53.589 [2024-07-13 06:21:59.834600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.834707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.834731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.589 qpair failed and we were unable to recover it. 00:26:53.589 [2024-07-13 06:21:59.834848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.834982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.835008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.589 qpair failed and we were unable to recover it. 00:26:53.589 [2024-07-13 06:21:59.835300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.835548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.835571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.589 qpair failed and we were unable to recover it. 
00:26:53.589 [2024-07-13 06:21:59.835732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.836041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.836066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.589 qpair failed and we were unable to recover it. 00:26:53.589 [2024-07-13 06:21:59.836242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.836369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.836393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.589 qpair failed and we were unable to recover it. 00:26:53.589 [2024-07-13 06:21:59.836650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.836829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.836853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.589 qpair failed and we were unable to recover it. 00:26:53.589 [2024-07-13 06:21:59.837015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.837143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.837167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.589 qpair failed and we were unable to recover it. 00:26:53.589 [2024-07-13 06:21:59.837349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.837490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.837513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.589 qpair failed and we were unable to recover it. 00:26:53.589 [2024-07-13 06:21:59.837772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.837970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.837995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.589 qpair failed and we were unable to recover it. 00:26:53.589 [2024-07-13 06:21:59.838164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.838344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.838369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.589 qpair failed and we were unable to recover it. 
00:26:53.589 [2024-07-13 06:21:59.838522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.838696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.589 [2024-07-13 06:21:59.838726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.590 qpair failed and we were unable to recover it. 00:26:53.590 [2024-07-13 06:21:59.838909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.839092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.839127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.590 qpair failed and we were unable to recover it. 00:26:53.590 [2024-07-13 06:21:59.839256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.839394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.839439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.590 qpair failed and we were unable to recover it. 00:26:53.590 [2024-07-13 06:21:59.839610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.839763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.839788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.590 qpair failed and we were unable to recover it. 00:26:53.590 [2024-07-13 06:21:59.839934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.840084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.840122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.590 qpair failed and we were unable to recover it. 00:26:53.590 [2024-07-13 06:21:59.840295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.840533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.840557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.590 qpair failed and we were unable to recover it. 00:26:53.590 [2024-07-13 06:21:59.840709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.840851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.840882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.590 qpair failed and we were unable to recover it. 
00:26:53.590 [2024-07-13 06:21:59.841019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.841210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.841235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.590 qpair failed and we were unable to recover it. 00:26:53.590 [2024-07-13 06:21:59.841382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.841535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.841559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.590 qpair failed and we were unable to recover it. 00:26:53.590 [2024-07-13 06:21:59.841887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.842085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.842110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.590 qpair failed and we were unable to recover it. 00:26:53.590 [2024-07-13 06:21:59.842299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.842448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.842472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.590 qpair failed and we were unable to recover it. 00:26:53.590 [2024-07-13 06:21:59.842620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.842742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.842766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.590 qpair failed and we were unable to recover it. 00:26:53.590 [2024-07-13 06:21:59.842891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.843010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.843035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.590 qpair failed and we were unable to recover it. 00:26:53.590 [2024-07-13 06:21:59.843153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.843301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.843326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.590 qpair failed and we were unable to recover it. 
00:26:53.590 [2024-07-13 06:21:59.843474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.843648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.843672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.590 qpair failed and we were unable to recover it. 00:26:53.590 [2024-07-13 06:21:59.843842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.844007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.844032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.590 qpair failed and we were unable to recover it. 00:26:53.590 [2024-07-13 06:21:59.844184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.844361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.844385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.590 qpair failed and we were unable to recover it. 00:26:53.590 [2024-07-13 06:21:59.844538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.844684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.844708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.590 qpair failed and we were unable to recover it. 00:26:53.590 [2024-07-13 06:21:59.844884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.845036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.845060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.590 qpair failed and we were unable to recover it. 00:26:53.590 [2024-07-13 06:21:59.845218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.845393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.845427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.590 qpair failed and we were unable to recover it. 00:26:53.590 [2024-07-13 06:21:59.845606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.845753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.590 [2024-07-13 06:21:59.845777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:53.590 qpair failed and we were unable to recover it. 
00:26:53.590 [2024-07-13 06:21:59.845953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.590 [2024-07-13 06:21:59.846082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:53.590 [2024-07-13 06:21:59.846106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420
00:26:53.590 qpair failed and we were unable to recover it.
[... the four-line failure sequence above (two posix_sock_create connect() errors with errno = 111, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats back-to-back with only the timestamps changing, from console time 00:26:53.590 through 00:26:54.125 (device time 2024-07-13 06:21:59.846 to 06:22:00.400); every attempt targets 10.0.0.2 port 4420 and fails with errno = 111 ...]
00:26:54.125 [2024-07-13 06:22:00.400144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.125 [2024-07-13 06:22:00.400267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.125 [2024-07-13 06:22:00.400294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.125 qpair failed and we were unable to recover it. 00:26:54.125 [2024-07-13 06:22:00.400436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.125 [2024-07-13 06:22:00.400571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.125 [2024-07-13 06:22:00.400600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.125 qpair failed and we were unable to recover it. 00:26:54.125 [2024-07-13 06:22:00.400754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.125 [2024-07-13 06:22:00.400908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.125 [2024-07-13 06:22:00.400938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.125 qpair failed and we were unable to recover it. 00:26:54.125 [2024-07-13 06:22:00.401103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.125 [2024-07-13 06:22:00.401228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.125 [2024-07-13 06:22:00.401254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.125 qpair failed and we were unable to recover it. 00:26:54.125 [2024-07-13 06:22:00.401373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.125 [2024-07-13 06:22:00.401485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.125 [2024-07-13 06:22:00.401511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.125 qpair failed and we were unable to recover it. 00:26:54.125 [2024-07-13 06:22:00.401684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.125 [2024-07-13 06:22:00.401838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.125 [2024-07-13 06:22:00.401874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.125 qpair failed and we were unable to recover it. 00:26:54.126 [2024-07-13 06:22:00.402036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.402222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.402269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.126 qpair failed and we were unable to recover it. 
00:26:54.126 [2024-07-13 06:22:00.402440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.402581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.402608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.126 qpair failed and we were unable to recover it. 00:26:54.126 [2024-07-13 06:22:00.402783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.402939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.402969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.126 qpair failed and we were unable to recover it. 00:26:54.126 [2024-07-13 06:22:00.403093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.403261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.403288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.126 qpair failed and we were unable to recover it. 00:26:54.126 [2024-07-13 06:22:00.403477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.403638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.403667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.126 qpair failed and we were unable to recover it. 00:26:54.126 [2024-07-13 06:22:00.403815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.403980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.404008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.126 qpair failed and we were unable to recover it. 00:26:54.126 [2024-07-13 06:22:00.404173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.404339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.404367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.126 qpair failed and we were unable to recover it. 00:26:54.126 [2024-07-13 06:22:00.404537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.404720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.404749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.126 qpair failed and we were unable to recover it. 
00:26:54.126 [2024-07-13 06:22:00.404919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.405038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.405064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.126 qpair failed and we were unable to recover it. 00:26:54.126 [2024-07-13 06:22:00.405186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.405330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.405357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.126 qpair failed and we were unable to recover it. 00:26:54.126 [2024-07-13 06:22:00.405500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.405699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.405728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.126 qpair failed and we were unable to recover it. 00:26:54.126 [2024-07-13 06:22:00.405931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.406076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.406104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.126 qpair failed and we were unable to recover it. 00:26:54.126 [2024-07-13 06:22:00.406249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.406410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.406440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.126 qpair failed and we were unable to recover it. 00:26:54.126 [2024-07-13 06:22:00.406585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.406755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.406798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.126 qpair failed and we were unable to recover it. 00:26:54.126 [2024-07-13 06:22:00.406961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.407087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.407117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.126 qpair failed and we were unable to recover it. 
00:26:54.126 [2024-07-13 06:22:00.407263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.407460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.407510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.126 qpair failed and we were unable to recover it. 00:26:54.126 [2024-07-13 06:22:00.407653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.407770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.407799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.126 qpair failed and we were unable to recover it. 00:26:54.126 [2024-07-13 06:22:00.407924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.408074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.408100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.126 qpair failed and we were unable to recover it. 00:26:54.126 [2024-07-13 06:22:00.408228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.408391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.408419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.126 qpair failed and we were unable to recover it. 00:26:54.126 [2024-07-13 06:22:00.408569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.408716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.408744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.126 qpair failed and we were unable to recover it. 00:26:54.126 [2024-07-13 06:22:00.408928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.409101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.409144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.126 qpair failed and we were unable to recover it. 00:26:54.126 [2024-07-13 06:22:00.409316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.409462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.409488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.126 qpair failed and we were unable to recover it. 
00:26:54.126 [2024-07-13 06:22:00.409689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.409852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.409888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.126 qpair failed and we were unable to recover it. 00:26:54.126 [2024-07-13 06:22:00.410030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.410169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.410198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.126 qpair failed and we were unable to recover it. 00:26:54.126 [2024-07-13 06:22:00.410383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.126 [2024-07-13 06:22:00.410530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.410559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.127 qpair failed and we were unable to recover it. 00:26:54.127 [2024-07-13 06:22:00.410727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.410900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.410944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.127 qpair failed and we were unable to recover it. 00:26:54.127 [2024-07-13 06:22:00.411110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.411257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.411283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.127 qpair failed and we were unable to recover it. 00:26:54.127 [2024-07-13 06:22:00.411399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.411539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.411565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.127 qpair failed and we were unable to recover it. 00:26:54.127 [2024-07-13 06:22:00.411710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.411842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.411891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.127 qpair failed and we were unable to recover it. 
00:26:54.127 [2024-07-13 06:22:00.412096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.412244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.412288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.127 qpair failed and we were unable to recover it. 00:26:54.127 [2024-07-13 06:22:00.412478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.412642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.412670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.127 qpair failed and we were unable to recover it. 00:26:54.127 [2024-07-13 06:22:00.412858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.413073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.413100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.127 qpair failed and we were unable to recover it. 00:26:54.127 [2024-07-13 06:22:00.413251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.413421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.413449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.127 qpair failed and we were unable to recover it. 00:26:54.127 [2024-07-13 06:22:00.413631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.413793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.413823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.127 qpair failed and we were unable to recover it. 00:26:54.127 [2024-07-13 06:22:00.414025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.414169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.414196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.127 qpair failed and we were unable to recover it. 00:26:54.127 [2024-07-13 06:22:00.414387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.414584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.414630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.127 qpair failed and we were unable to recover it. 
00:26:54.127 [2024-07-13 06:22:00.414784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.414906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.414934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.127 qpair failed and we were unable to recover it. 00:26:54.127 [2024-07-13 06:22:00.415074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.415216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.415258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.127 qpair failed and we were unable to recover it. 00:26:54.127 [2024-07-13 06:22:00.415417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.415581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.415610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.127 qpair failed and we were unable to recover it. 00:26:54.127 [2024-07-13 06:22:00.415787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.415944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.415974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.127 qpair failed and we were unable to recover it. 00:26:54.127 [2024-07-13 06:22:00.416170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.416316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.416342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.127 qpair failed and we were unable to recover it. 00:26:54.127 [2024-07-13 06:22:00.416482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.416600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.416626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.127 qpair failed and we were unable to recover it. 00:26:54.127 [2024-07-13 06:22:00.416829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.416998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.417028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.127 qpair failed and we were unable to recover it. 
00:26:54.127 [2024-07-13 06:22:00.417198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.417347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.417373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.127 qpair failed and we were unable to recover it. 00:26:54.127 [2024-07-13 06:22:00.417583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.417755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.417782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.127 qpair failed and we were unable to recover it. 00:26:54.127 [2024-07-13 06:22:00.417933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.418052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.418079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.127 qpair failed and we were unable to recover it. 00:26:54.127 [2024-07-13 06:22:00.418257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.418464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.418510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.127 qpair failed and we were unable to recover it. 00:26:54.127 [2024-07-13 06:22:00.418684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.418832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.418859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.127 qpair failed and we were unable to recover it. 00:26:54.127 [2024-07-13 06:22:00.419072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.419204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.419232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.127 qpair failed and we were unable to recover it. 00:26:54.127 [2024-07-13 06:22:00.419375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.419512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.419555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.127 qpair failed and we were unable to recover it. 
00:26:54.127 [2024-07-13 06:22:00.419713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.419951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.419981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.127 qpair failed and we were unable to recover it. 00:26:54.127 [2024-07-13 06:22:00.420180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.420385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.420431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.127 qpair failed and we were unable to recover it. 00:26:54.127 [2024-07-13 06:22:00.420620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.420804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.420833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.127 qpair failed and we were unable to recover it. 00:26:54.127 [2024-07-13 06:22:00.421008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.421132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.421158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.127 qpair failed and we were unable to recover it. 00:26:54.127 [2024-07-13 06:22:00.421359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.421493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.421522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.127 qpair failed and we were unable to recover it. 00:26:54.127 [2024-07-13 06:22:00.421682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.127 [2024-07-13 06:22:00.421855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.421892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.128 qpair failed and we were unable to recover it. 00:26:54.128 [2024-07-13 06:22:00.422060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.422228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.422259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.128 qpair failed and we were unable to recover it. 
00:26:54.128 [2024-07-13 06:22:00.422433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.422580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.422606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.128 qpair failed and we were unable to recover it. 00:26:54.128 [2024-07-13 06:22:00.422782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.422944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.422974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.128 qpair failed and we were unable to recover it. 00:26:54.128 [2024-07-13 06:22:00.423134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.423277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.423323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.128 qpair failed and we were unable to recover it. 00:26:54.128 [2024-07-13 06:22:00.423512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.423695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.423725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.128 qpair failed and we were unable to recover it. 00:26:54.128 [2024-07-13 06:22:00.423884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.424030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.424074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.128 qpair failed and we were unable to recover it. 00:26:54.128 [2024-07-13 06:22:00.424262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.424394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.424424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.128 qpair failed and we were unable to recover it. 00:26:54.128 [2024-07-13 06:22:00.424583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.424710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.424739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.128 qpair failed and we were unable to recover it. 
00:26:54.128 [2024-07-13 06:22:00.424937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.425076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.425102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.128 qpair failed and we were unable to recover it. 00:26:54.128 [2024-07-13 06:22:00.425276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.425444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.425474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.128 qpair failed and we were unable to recover it. 00:26:54.128 [2024-07-13 06:22:00.425661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.425823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.425856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.128 qpair failed and we were unable to recover it. 00:26:54.128 [2024-07-13 06:22:00.426033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.426196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.426225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.128 qpair failed and we were unable to recover it. 00:26:54.128 [2024-07-13 06:22:00.426389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.426547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.426576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.128 qpair failed and we were unable to recover it. 00:26:54.128 [2024-07-13 06:22:00.426731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.426901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.426944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.128 qpair failed and we were unable to recover it. 00:26:54.128 [2024-07-13 06:22:00.427105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.427247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.427273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.128 qpair failed and we were unable to recover it. 
00:26:54.128 [2024-07-13 06:22:00.427499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.427650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.427677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.128 qpair failed and we were unable to recover it. 00:26:54.128 [2024-07-13 06:22:00.427798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.427939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.427967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.128 qpair failed and we were unable to recover it. 00:26:54.128 [2024-07-13 06:22:00.428088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.428229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.428256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.128 qpair failed and we were unable to recover it. 00:26:54.128 [2024-07-13 06:22:00.428446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.428591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.428617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.128 qpair failed and we were unable to recover it. 00:26:54.128 [2024-07-13 06:22:00.428757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.428915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.428945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.128 qpair failed and we were unable to recover it. 00:26:54.128 [2024-07-13 06:22:00.429106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.429268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.429297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.128 qpair failed and we were unable to recover it. 00:26:54.128 [2024-07-13 06:22:00.429441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.429591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.429617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.128 qpair failed and we were unable to recover it. 
00:26:54.128 [2024-07-13 06:22:00.429807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.429968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.429998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.128 qpair failed and we were unable to recover it. 00:26:54.128 [2024-07-13 06:22:00.430173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.430404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.430450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.128 qpair failed and we were unable to recover it. 00:26:54.128 [2024-07-13 06:22:00.430608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.430744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.430775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.128 qpair failed and we were unable to recover it. 00:26:54.128 [2024-07-13 06:22:00.430970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.431118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.431162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.128 qpair failed and we were unable to recover it. 00:26:54.128 [2024-07-13 06:22:00.431316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.431521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.431568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.128 qpair failed and we were unable to recover it. 00:26:54.128 [2024-07-13 06:22:00.431732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.431854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.431893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.128 qpair failed and we were unable to recover it. 00:26:54.128 [2024-07-13 06:22:00.432050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.432211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.432240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.128 qpair failed and we were unable to recover it. 
00:26:54.128 [2024-07-13 06:22:00.432404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.432547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.128 [2024-07-13 06:22:00.432573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.128 qpair failed and we were unable to recover it. 00:26:54.128 [2024-07-13 06:22:00.432753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.129 [2024-07-13 06:22:00.432889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.129 [2024-07-13 06:22:00.432926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.129 qpair failed and we were unable to recover it. 00:26:54.129 [2024-07-13 06:22:00.433116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.129 [2024-07-13 06:22:00.433254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.129 [2024-07-13 06:22:00.433302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.129 qpair failed and we were unable to recover it. 00:26:54.129 [2024-07-13 06:22:00.433463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.129 [2024-07-13 06:22:00.433655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.129 [2024-07-13 06:22:00.433681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.129 qpair failed and we were unable to recover it. 00:26:54.129 [2024-07-13 06:22:00.433799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.129 [2024-07-13 06:22:00.433945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.129 [2024-07-13 06:22:00.433972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.129 qpair failed and we were unable to recover it. 00:26:54.129 [2024-07-13 06:22:00.434103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.129 [2024-07-13 06:22:00.434306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.129 [2024-07-13 06:22:00.434353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.129 qpair failed and we were unable to recover it. 00:26:54.129 [2024-07-13 06:22:00.434545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.129 [2024-07-13 06:22:00.434708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.129 [2024-07-13 06:22:00.434737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.129 qpair failed and we were unable to recover it. 
00:26:54.129 [2024-07-13 06:22:00.434922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.129 [2024-07-13 06:22:00.435050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.129 [2024-07-13 06:22:00.435079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420
00:26:54.129 qpair failed and we were unable to recover it.
00:26:54.129 [... the same three-line failure sequence (posix.c:1032:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every connection attempt from 06:22:00.435243 through 06:22:00.490995 ...]
00:26:54.134 [2024-07-13 06:22:00.491118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.491233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.491259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.134 qpair failed and we were unable to recover it. 00:26:54.134 [2024-07-13 06:22:00.491376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.491516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.491582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.134 qpair failed and we were unable to recover it. 00:26:54.134 [2024-07-13 06:22:00.491754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.491899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.491926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.134 qpair failed and we were unable to recover it. 00:26:54.134 [2024-07-13 06:22:00.492110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.492284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.492310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.134 qpair failed and we were unable to recover it. 00:26:54.134 [2024-07-13 06:22:00.492510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.492666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.492694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.134 qpair failed and we were unable to recover it. 00:26:54.134 [2024-07-13 06:22:00.492823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.493027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.493054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.134 qpair failed and we were unable to recover it. 00:26:54.134 [2024-07-13 06:22:00.493207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.493363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.493392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.134 qpair failed and we were unable to recover it. 
00:26:54.134 [2024-07-13 06:22:00.493559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.493746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.493775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.134 qpair failed and we were unable to recover it. 00:26:54.134 [2024-07-13 06:22:00.493938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.494095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.494125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.134 qpair failed and we were unable to recover it. 00:26:54.134 [2024-07-13 06:22:00.494317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.494461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.494488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.134 qpair failed and we were unable to recover it. 00:26:54.134 [2024-07-13 06:22:00.494686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.494897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.494945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.134 qpair failed and we were unable to recover it. 00:26:54.134 [2024-07-13 06:22:00.495120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.495382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.495434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.134 qpair failed and we were unable to recover it. 00:26:54.134 [2024-07-13 06:22:00.495591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.495779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.495808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.134 qpair failed and we were unable to recover it. 00:26:54.134 [2024-07-13 06:22:00.495969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.496127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.496156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.134 qpair failed and we were unable to recover it. 
00:26:54.134 [2024-07-13 06:22:00.496320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.496490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.496517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.134 qpair failed and we were unable to recover it. 00:26:54.134 [2024-07-13 06:22:00.496689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.496846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.496883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.134 qpair failed and we were unable to recover it. 00:26:54.134 [2024-07-13 06:22:00.497044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.497174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.497203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.134 qpair failed and we were unable to recover it. 00:26:54.134 [2024-07-13 06:22:00.497363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.497635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.497665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.134 qpair failed and we were unable to recover it. 00:26:54.134 [2024-07-13 06:22:00.497826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.497995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.498025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.134 qpair failed and we were unable to recover it. 00:26:54.134 [2024-07-13 06:22:00.498205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.498443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.134 [2024-07-13 06:22:00.498504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.135 qpair failed and we were unable to recover it. 00:26:54.135 [2024-07-13 06:22:00.498701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.498871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.498901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.135 qpair failed and we were unable to recover it. 
00:26:54.135 [2024-07-13 06:22:00.499071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.499253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.499282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.135 qpair failed and we were unable to recover it. 00:26:54.135 [2024-07-13 06:22:00.499449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.499589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.499620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.135 qpair failed and we were unable to recover it. 00:26:54.135 [2024-07-13 06:22:00.499815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.499969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.500000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.135 qpair failed and we were unable to recover it. 00:26:54.135 [2024-07-13 06:22:00.500153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.500321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.500350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.135 qpair failed and we were unable to recover it. 00:26:54.135 [2024-07-13 06:22:00.500537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.500664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.500693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.135 qpair failed and we were unable to recover it. 00:26:54.135 [2024-07-13 06:22:00.500878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.501050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.501076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.135 qpair failed and we were unable to recover it. 00:26:54.135 [2024-07-13 06:22:00.501197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.501368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.501395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.135 qpair failed and we were unable to recover it. 
00:26:54.135 [2024-07-13 06:22:00.501575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.501746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.501774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.135 qpair failed and we were unable to recover it. 00:26:54.135 [2024-07-13 06:22:00.502004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.502142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.502172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.135 qpair failed and we were unable to recover it. 00:26:54.135 [2024-07-13 06:22:00.502364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.502534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.502576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.135 qpair failed and we were unable to recover it. 00:26:54.135 [2024-07-13 06:22:00.502748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.502899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.502926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.135 qpair failed and we were unable to recover it. 00:26:54.135 [2024-07-13 06:22:00.503053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.503262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.503289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.135 qpair failed and we were unable to recover it. 00:26:54.135 [2024-07-13 06:22:00.503438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.503587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.503613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.135 qpair failed and we were unable to recover it. 00:26:54.135 [2024-07-13 06:22:00.503805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.503971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.504006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.135 qpair failed and we were unable to recover it. 
00:26:54.135 [2024-07-13 06:22:00.504204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.504368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.504397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.135 qpair failed and we were unable to recover it. 00:26:54.135 [2024-07-13 06:22:00.504554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.504742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.504771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.135 qpair failed and we were unable to recover it. 00:26:54.135 [2024-07-13 06:22:00.504933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.505124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.505153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.135 qpair failed and we were unable to recover it. 00:26:54.135 [2024-07-13 06:22:00.505317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.505435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.505464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.135 qpair failed and we were unable to recover it. 00:26:54.135 [2024-07-13 06:22:00.505662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.505813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.505840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.135 qpair failed and we were unable to recover it. 00:26:54.135 [2024-07-13 06:22:00.505998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.506168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.506209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.135 qpair failed and we were unable to recover it. 00:26:54.135 [2024-07-13 06:22:00.506487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.506649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.506678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.135 qpair failed and we were unable to recover it. 
00:26:54.135 [2024-07-13 06:22:00.506873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.507020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.507062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.135 qpair failed and we were unable to recover it. 00:26:54.135 [2024-07-13 06:22:00.507231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.507352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.507379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.135 qpair failed and we were unable to recover it. 00:26:54.135 [2024-07-13 06:22:00.507544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.507707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.507750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.135 qpair failed and we were unable to recover it. 00:26:54.135 [2024-07-13 06:22:00.507925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.508074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.135 [2024-07-13 06:22:00.508100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.136 qpair failed and we were unable to recover it. 00:26:54.136 [2024-07-13 06:22:00.508245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.508397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.508427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.136 qpair failed and we were unable to recover it. 00:26:54.136 [2024-07-13 06:22:00.508589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.508734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.508776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.136 qpair failed and we were unable to recover it. 00:26:54.136 [2024-07-13 06:22:00.508933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.509088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.509118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.136 qpair failed and we were unable to recover it. 
00:26:54.136 [2024-07-13 06:22:00.509355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.509588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.509654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.136 qpair failed and we were unable to recover it. 00:26:54.136 [2024-07-13 06:22:00.509798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.509967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.509995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.136 qpair failed and we were unable to recover it. 00:26:54.136 [2024-07-13 06:22:00.510142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.510261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.510288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.136 qpair failed and we were unable to recover it. 00:26:54.136 [2024-07-13 06:22:00.510503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.510653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.510679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.136 qpair failed and we were unable to recover it. 00:26:54.136 [2024-07-13 06:22:00.510853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.511027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.511056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.136 qpair failed and we were unable to recover it. 00:26:54.136 [2024-07-13 06:22:00.511246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.511450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.511476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.136 qpair failed and we were unable to recover it. 00:26:54.136 [2024-07-13 06:22:00.511651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.511775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.511801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.136 qpair failed and we were unable to recover it. 
00:26:54.136 [2024-07-13 06:22:00.511973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.512138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.512167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.136 qpair failed and we were unable to recover it. 00:26:54.136 [2024-07-13 06:22:00.512473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.512667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.512696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.136 qpair failed and we were unable to recover it. 00:26:54.136 [2024-07-13 06:22:00.512875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.513024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.513050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.136 qpair failed and we were unable to recover it. 00:26:54.136 [2024-07-13 06:22:00.513199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.513396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.513458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.136 qpair failed and we were unable to recover it. 00:26:54.136 [2024-07-13 06:22:00.513641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.513803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.513832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.136 qpair failed and we were unable to recover it. 00:26:54.136 [2024-07-13 06:22:00.514029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.514165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.514207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.136 qpair failed and we were unable to recover it. 00:26:54.136 [2024-07-13 06:22:00.514348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.514492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.514522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.136 qpair failed and we were unable to recover it. 
00:26:54.136 [2024-07-13 06:22:00.514677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.514827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.514854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.136 qpair failed and we were unable to recover it. 00:26:54.136 [2024-07-13 06:22:00.515001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.515188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.515218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.136 qpair failed and we were unable to recover it. 00:26:54.136 [2024-07-13 06:22:00.515378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.515514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.515543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.136 qpair failed and we were unable to recover it. 00:26:54.136 [2024-07-13 06:22:00.515742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.515896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.515924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.136 qpair failed and we were unable to recover it. 00:26:54.136 [2024-07-13 06:22:00.516100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.516212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.516239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.136 qpair failed and we were unable to recover it. 00:26:54.136 [2024-07-13 06:22:00.516380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.516538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.516567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.136 qpair failed and we were unable to recover it. 00:26:54.136 [2024-07-13 06:22:00.516727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.516962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.517013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.136 qpair failed and we were unable to recover it. 
00:26:54.136 [2024-07-13 06:22:00.517211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.517434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.517486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.136 qpair failed and we were unable to recover it. 00:26:54.136 [2024-07-13 06:22:00.517686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.517879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.517909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.136 qpair failed and we were unable to recover it. 00:26:54.136 [2024-07-13 06:22:00.518066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.518282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.518343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.136 qpair failed and we were unable to recover it. 00:26:54.136 [2024-07-13 06:22:00.518539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.518718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.518759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.136 qpair failed and we were unable to recover it. 00:26:54.136 [2024-07-13 06:22:00.518910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.519086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.519112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.136 qpair failed and we were unable to recover it. 00:26:54.136 [2024-07-13 06:22:00.519282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.519405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.519448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.136 qpair failed and we were unable to recover it. 00:26:54.136 [2024-07-13 06:22:00.519635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.136 [2024-07-13 06:22:00.519797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.519825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.137 qpair failed and we were unable to recover it. 
00:26:54.137 [2024-07-13 06:22:00.520027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.520206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.520232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.137 qpair failed and we were unable to recover it. 00:26:54.137 [2024-07-13 06:22:00.520377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.520542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.520570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.137 qpair failed and we were unable to recover it. 00:26:54.137 [2024-07-13 06:22:00.520737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.520857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.520894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.137 qpair failed and we were unable to recover it. 00:26:54.137 [2024-07-13 06:22:00.521031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.521153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.521179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.137 qpair failed and we were unable to recover it. 00:26:54.137 [2024-07-13 06:22:00.521335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.521514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.521540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.137 qpair failed and we were unable to recover it. 00:26:54.137 [2024-07-13 06:22:00.521656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.521800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.521826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.137 qpair failed and we were unable to recover it. 00:26:54.137 [2024-07-13 06:22:00.521982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.522135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.522161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.137 qpair failed and we were unable to recover it. 
00:26:54.137 [2024-07-13 06:22:00.522320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.522477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.522505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.137 qpair failed and we were unable to recover it. 00:26:54.137 [2024-07-13 06:22:00.522653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.522804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.522837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.137 qpair failed and we were unable to recover it. 00:26:54.137 [2024-07-13 06:22:00.523024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.523154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.523183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.137 qpair failed and we were unable to recover it. 00:26:54.137 [2024-07-13 06:22:00.523350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.523466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.523491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.137 qpair failed and we were unable to recover it. 00:26:54.137 [2024-07-13 06:22:00.523660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.523798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.523826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.137 qpair failed and we were unable to recover it. 00:26:54.137 [2024-07-13 06:22:00.524000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.524158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.524186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.137 qpair failed and we were unable to recover it. 00:26:54.137 [2024-07-13 06:22:00.524373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.524536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.524565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.137 qpair failed and we were unable to recover it. 
00:26:54.137 [2024-07-13 06:22:00.524757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.524903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.524941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.137 qpair failed and we were unable to recover it. 00:26:54.137 [2024-07-13 06:22:00.525107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.525246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.525276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.137 qpair failed and we were unable to recover it. 00:26:54.137 [2024-07-13 06:22:00.525437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.525570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.525598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.137 qpair failed and we were unable to recover it. 00:26:54.137 [2024-07-13 06:22:00.525772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.525895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.525950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.137 qpair failed and we were unable to recover it. 00:26:54.137 [2024-07-13 06:22:00.526081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.526249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.526279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.137 qpair failed and we were unable to recover it. 00:26:54.137 [2024-07-13 06:22:00.526428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.526599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.526628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.137 qpair failed and we were unable to recover it. 00:26:54.137 [2024-07-13 06:22:00.526789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.526961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.137 [2024-07-13 06:22:00.526987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.137 qpair failed and we were unable to recover it. 
00:26:54.137 [2024-07-13 06:22:00.527111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.137 [2024-07-13 06:22:00.527233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.137 [2024-07-13 06:22:00.527259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420
00:26:54.137 qpair failed and we were unable to recover it.
00:26:54.137 [2024-07-13 06:22:00.527407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.137 [2024-07-13 06:22:00.527530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.137 [2024-07-13 06:22:00.527555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420
00:26:54.137 qpair failed and we were unable to recover it.
...
00:26:54.142 [2024-07-13 06:22:00.582650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.142 [2024-07-13 06:22:00.582796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.142 [2024-07-13 06:22:00.582822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420
00:26:54.142 qpair failed and we were unable to recover it.
00:26:54.142 [2024-07-13 06:22:00.583024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.142 [2024-07-13 06:22:00.583184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.142 [2024-07-13 06:22:00.583212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.142 qpair failed and we were unable to recover it. 00:26:54.142 [2024-07-13 06:22:00.583399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.142 [2024-07-13 06:22:00.583552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.142 [2024-07-13 06:22:00.583580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.142 qpair failed and we were unable to recover it. 00:26:54.142 [2024-07-13 06:22:00.583742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.142 [2024-07-13 06:22:00.583909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.142 [2024-07-13 06:22:00.583935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.142 qpair failed and we were unable to recover it. 00:26:54.142 [2024-07-13 06:22:00.584082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.142 [2024-07-13 06:22:00.584269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.142 [2024-07-13 06:22:00.584296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.142 qpair failed and we were unable to recover it. 00:26:54.142 [2024-07-13 06:22:00.584456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.142 [2024-07-13 06:22:00.584607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.142 [2024-07-13 06:22:00.584635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.143 qpair failed and we were unable to recover it. 00:26:54.143 [2024-07-13 06:22:00.584846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.585014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.585040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.143 qpair failed and we were unable to recover it. 00:26:54.143 [2024-07-13 06:22:00.585202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.585358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.585391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.143 qpair failed and we were unable to recover it. 
00:26:54.143 [2024-07-13 06:22:00.585582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.585700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.585741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.143 qpair failed and we were unable to recover it. 00:26:54.143 [2024-07-13 06:22:00.585915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.586078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.586106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.143 qpair failed and we were unable to recover it. 00:26:54.143 [2024-07-13 06:22:00.586331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.586458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.586486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.143 qpair failed and we were unable to recover it. 00:26:54.143 [2024-07-13 06:22:00.586666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.586828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.586856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.143 qpair failed and we were unable to recover it. 00:26:54.143 [2024-07-13 06:22:00.587007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.587155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.587180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.143 qpair failed and we were unable to recover it. 00:26:54.143 [2024-07-13 06:22:00.587338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.587509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.587535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.143 qpair failed and we were unable to recover it. 00:26:54.143 [2024-07-13 06:22:00.587722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.587921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.587959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.143 qpair failed and we were unable to recover it. 
00:26:54.143 [2024-07-13 06:22:00.588141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.588247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.588273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.143 qpair failed and we were unable to recover it. 00:26:54.143 [2024-07-13 06:22:00.588440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.588597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.588625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.143 qpair failed and we were unable to recover it. 00:26:54.143 [2024-07-13 06:22:00.588754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.588901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.588930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.143 qpair failed and we were unable to recover it. 00:26:54.143 [2024-07-13 06:22:00.589093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.589275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.589303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.143 qpair failed and we were unable to recover it. 00:26:54.143 [2024-07-13 06:22:00.589461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.589645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.589673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.143 qpair failed and we were unable to recover it. 00:26:54.143 [2024-07-13 06:22:00.589808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.589952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.589978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.143 qpair failed and we were unable to recover it. 00:26:54.143 [2024-07-13 06:22:00.590142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.590268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.590298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.143 qpair failed and we were unable to recover it. 
00:26:54.143 [2024-07-13 06:22:00.590500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.590704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.590733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.143 qpair failed and we were unable to recover it. 00:26:54.143 [2024-07-13 06:22:00.590893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.591055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.591084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.143 qpair failed and we were unable to recover it. 00:26:54.143 [2024-07-13 06:22:00.591227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.591368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.591394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.143 qpair failed and we were unable to recover it. 00:26:54.143 [2024-07-13 06:22:00.591542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.591709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.591735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.143 qpair failed and we were unable to recover it. 00:26:54.143 [2024-07-13 06:22:00.591841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.592038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.592067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.143 qpair failed and we were unable to recover it. 00:26:54.143 [2024-07-13 06:22:00.592256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.592439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.592483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.143 qpair failed and we were unable to recover it. 00:26:54.143 [2024-07-13 06:22:00.592682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.592878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.592907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.143 qpair failed and we were unable to recover it. 
00:26:54.143 [2024-07-13 06:22:00.593072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.593217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.593242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.143 qpair failed and we were unable to recover it. 00:26:54.143 [2024-07-13 06:22:00.593395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.593729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.593784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.143 qpair failed and we were unable to recover it. 00:26:54.143 [2024-07-13 06:22:00.593954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.594091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.594119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.143 qpair failed and we were unable to recover it. 00:26:54.143 [2024-07-13 06:22:00.594290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.143 [2024-07-13 06:22:00.594431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.594456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.144 qpair failed and we were unable to recover it. 00:26:54.144 [2024-07-13 06:22:00.594631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.594817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.594845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.144 qpair failed and we were unable to recover it. 00:26:54.144 [2024-07-13 06:22:00.594984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.595135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.595164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.144 qpair failed and we were unable to recover it. 00:26:54.144 [2024-07-13 06:22:00.595324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.595559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.595621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.144 qpair failed and we were unable to recover it. 
00:26:54.144 [2024-07-13 06:22:00.595814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.595961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.595988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.144 qpair failed and we were unable to recover it. 00:26:54.144 [2024-07-13 06:22:00.596163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.596304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.596329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.144 qpair failed and we were unable to recover it. 00:26:54.144 [2024-07-13 06:22:00.596486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.596610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.596635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.144 qpair failed and we were unable to recover it. 00:26:54.144 [2024-07-13 06:22:00.596781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.596892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.596918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.144 qpair failed and we were unable to recover it. 00:26:54.144 [2024-07-13 06:22:00.597039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.597156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.597182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.144 qpair failed and we were unable to recover it. 00:26:54.144 [2024-07-13 06:22:00.597322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.597493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.597518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.144 qpair failed and we were unable to recover it. 00:26:54.144 [2024-07-13 06:22:00.597644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.597796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.597821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.144 qpair failed and we were unable to recover it. 
00:26:54.144 [2024-07-13 06:22:00.597999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.598119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.598144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.144 qpair failed and we were unable to recover it. 00:26:54.144 [2024-07-13 06:22:00.598262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.598431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.598456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.144 qpair failed and we were unable to recover it. 00:26:54.144 [2024-07-13 06:22:00.598596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.598768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.598793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.144 qpair failed and we were unable to recover it. 00:26:54.144 [2024-07-13 06:22:00.598946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.599085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.599111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.144 qpair failed and we were unable to recover it. 00:26:54.144 [2024-07-13 06:22:00.599235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.599357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.599383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.144 qpair failed and we were unable to recover it. 00:26:54.144 [2024-07-13 06:22:00.599529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.599674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.599700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.144 qpair failed and we were unable to recover it. 00:26:54.144 [2024-07-13 06:22:00.599824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.599944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.599970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.144 qpair failed and we were unable to recover it. 
00:26:54.144 [2024-07-13 06:22:00.600084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.600223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.600249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.144 qpair failed and we were unable to recover it. 00:26:54.144 [2024-07-13 06:22:00.600362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.600510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.600536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.144 qpair failed and we were unable to recover it. 00:26:54.144 [2024-07-13 06:22:00.600715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.600840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.600875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.144 qpair failed and we were unable to recover it. 00:26:54.144 [2024-07-13 06:22:00.601047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.601163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.601189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.144 qpair failed and we were unable to recover it. 00:26:54.144 [2024-07-13 06:22:00.601329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.601502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.601561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.144 qpair failed and we were unable to recover it. 00:26:54.144 [2024-07-13 06:22:00.601756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.601896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.601926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.144 qpair failed and we were unable to recover it. 00:26:54.144 [2024-07-13 06:22:00.602095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.602220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.602245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.144 qpair failed and we were unable to recover it. 
00:26:54.144 [2024-07-13 06:22:00.602397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.602506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.602532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.144 qpair failed and we were unable to recover it. 00:26:54.144 [2024-07-13 06:22:00.602711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.602880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.602911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.144 qpair failed and we were unable to recover it. 00:26:54.144 [2024-07-13 06:22:00.603051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.603191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.603216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.144 qpair failed and we were unable to recover it. 00:26:54.144 [2024-07-13 06:22:00.603356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.603493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.603519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.144 qpair failed and we were unable to recover it. 00:26:54.144 [2024-07-13 06:22:00.603666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.603807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.603833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.144 qpair failed and we were unable to recover it. 00:26:54.144 [2024-07-13 06:22:00.603981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.604127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.144 [2024-07-13 06:22:00.604152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.144 qpair failed and we were unable to recover it. 00:26:54.144 [2024-07-13 06:22:00.604306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.604447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.604473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.145 qpair failed and we were unable to recover it. 
00:26:54.145 [2024-07-13 06:22:00.604588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.604756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.604797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.145 qpair failed and we were unable to recover it. 00:26:54.145 [2024-07-13 06:22:00.604986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.605140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.605169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.145 qpair failed and we were unable to recover it. 00:26:54.145 [2024-07-13 06:22:00.605407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.605567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.605595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.145 qpair failed and we were unable to recover it. 00:26:54.145 [2024-07-13 06:22:00.605766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.605907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.605934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.145 qpair failed and we were unable to recover it. 00:26:54.145 [2024-07-13 06:22:00.606105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.606275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.606301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.145 qpair failed and we were unable to recover it. 00:26:54.145 [2024-07-13 06:22:00.606476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.606621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.606647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.145 qpair failed and we were unable to recover it. 00:26:54.145 [2024-07-13 06:22:00.606789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.606968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.606998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.145 qpair failed and we were unable to recover it. 
00:26:54.145 [2024-07-13 06:22:00.607156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.607306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.607334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.145 qpair failed and we were unable to recover it. 00:26:54.145 [2024-07-13 06:22:00.607498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.607641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.607667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.145 qpair failed and we were unable to recover it. 00:26:54.145 [2024-07-13 06:22:00.607812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.607930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.607956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.145 qpair failed and we were unable to recover it. 00:26:54.145 [2024-07-13 06:22:00.608139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.608284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.608310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.145 qpair failed and we were unable to recover it. 00:26:54.145 [2024-07-13 06:22:00.608504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.608626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.608654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.145 qpair failed and we were unable to recover it. 00:26:54.145 [2024-07-13 06:22:00.608807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.608951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.608994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.145 qpair failed and we were unable to recover it. 00:26:54.145 [2024-07-13 06:22:00.609148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.609308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.609337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.145 qpair failed and we were unable to recover it. 
00:26:54.145 [2024-07-13 06:22:00.609578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.609738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.609766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.145 qpair failed and we were unable to recover it. 00:26:54.145 [2024-07-13 06:22:00.609914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.610105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.610132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.145 qpair failed and we were unable to recover it. 00:26:54.145 [2024-07-13 06:22:00.610277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.610446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.610472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.145 qpair failed and we were unable to recover it. 00:26:54.145 [2024-07-13 06:22:00.610685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.610859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.610893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.145 qpair failed and we were unable to recover it. 00:26:54.145 [2024-07-13 06:22:00.611041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.611150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.611175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.145 qpair failed and we were unable to recover it. 00:26:54.145 [2024-07-13 06:22:00.611345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.611494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.611519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.145 qpair failed and we were unable to recover it. 00:26:54.145 [2024-07-13 06:22:00.611664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.611839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.611873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.145 qpair failed and we were unable to recover it. 
00:26:54.145 [2024-07-13 06:22:00.612020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.612216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.612244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.145 qpair failed and we were unable to recover it. 00:26:54.145 [2024-07-13 06:22:00.612444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.612614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.612640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.145 qpair failed and we were unable to recover it. 00:26:54.145 [2024-07-13 06:22:00.612755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.612903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.612930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.145 qpair failed and we were unable to recover it. 00:26:54.145 [2024-07-13 06:22:00.613073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.613192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.613218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.145 qpair failed and we were unable to recover it. 00:26:54.145 [2024-07-13 06:22:00.613371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.613515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.613540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.145 qpair failed and we were unable to recover it. 00:26:54.145 [2024-07-13 06:22:00.613663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.613781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.613806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.145 qpair failed and we were unable to recover it. 00:26:54.145 [2024-07-13 06:22:00.613977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.614096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.614122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.145 qpair failed and we were unable to recover it. 
00:26:54.145 [2024-07-13 06:22:00.614269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.614407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.614433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.145 qpair failed and we were unable to recover it. 00:26:54.145 [2024-07-13 06:22:00.614580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.614718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.145 [2024-07-13 06:22:00.614744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.145 qpair failed and we were unable to recover it. 00:26:54.146 [2024-07-13 06:22:00.614862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.146 [2024-07-13 06:22:00.615081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.146 [2024-07-13 06:22:00.615119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.146 qpair failed and we were unable to recover it. 00:26:54.146 [2024-07-13 06:22:00.615250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.146 [2024-07-13 06:22:00.615373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.146 [2024-07-13 06:22:00.615398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.146 qpair failed and we were unable to recover it. 00:26:54.146 [2024-07-13 06:22:00.615538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.146 [2024-07-13 06:22:00.615693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.146 [2024-07-13 06:22:00.615720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.146 qpair failed and we were unable to recover it. 00:26:54.146 [2024-07-13 06:22:00.615850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.146 [2024-07-13 06:22:00.616028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.146 [2024-07-13 06:22:00.616054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.146 qpair failed and we were unable to recover it. 00:26:54.146 [2024-07-13 06:22:00.616180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.146 [2024-07-13 06:22:00.616329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.146 [2024-07-13 06:22:00.616354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.146 qpair failed and we were unable to recover it. 
00:26:54.146 [2024-07-13 06:22:00.616528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.146 [2024-07-13 06:22:00.616721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.146 [2024-07-13 06:22:00.616759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.146 qpair failed and we were unable to recover it. 00:26:54.146 [2024-07-13 06:22:00.616927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.146 [2024-07-13 06:22:00.617072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.146 [2024-07-13 06:22:00.617100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.146 qpair failed and we were unable to recover it. 00:26:54.146 [2024-07-13 06:22:00.617253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.146 [2024-07-13 06:22:00.617428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.146 [2024-07-13 06:22:00.617456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.146 qpair failed and we were unable to recover it. 00:26:54.146 [2024-07-13 06:22:00.617601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.146 [2024-07-13 06:22:00.617752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.146 [2024-07-13 06:22:00.617778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.146 qpair failed and we were unable to recover it. 00:26:54.146 [2024-07-13 06:22:00.617922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.146 [2024-07-13 06:22:00.618063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.146 [2024-07-13 06:22:00.618089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.146 qpair failed and we were unable to recover it. 00:26:54.146 [2024-07-13 06:22:00.618234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.146 [2024-07-13 06:22:00.618377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.146 [2024-07-13 06:22:00.618403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.146 qpair failed and we were unable to recover it. 00:26:54.146 [2024-07-13 06:22:00.618542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.146 [2024-07-13 06:22:00.618739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.146 [2024-07-13 06:22:00.618779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.146 qpair failed and we were unable to recover it. 
00:26:54.146 [2024-07-13 06:22:00.618947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.146 [2024-07-13 06:22:00.619097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.146 [2024-07-13 06:22:00.619124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.419 qpair failed and we were unable to recover it. 00:26:54.419 [2024-07-13 06:22:00.619309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.419 [2024-07-13 06:22:00.619434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.419 [2024-07-13 06:22:00.619462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.419 qpair failed and we were unable to recover it. 00:26:54.419 [2024-07-13 06:22:00.619651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.419 [2024-07-13 06:22:00.619816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.419 [2024-07-13 06:22:00.619853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.419 qpair failed and we were unable to recover it. 00:26:54.419 [2024-07-13 06:22:00.620038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.419 [2024-07-13 06:22:00.620173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.419 [2024-07-13 06:22:00.620215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.419 qpair failed and we were unable to recover it. 00:26:54.419 [2024-07-13 06:22:00.620356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.419 [2024-07-13 06:22:00.620524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.419 [2024-07-13 06:22:00.620562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.419 qpair failed and we were unable to recover it. 00:26:54.419 [2024-07-13 06:22:00.620734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.419 [2024-07-13 06:22:00.620881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.419 [2024-07-13 06:22:00.620920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.419 qpair failed and we were unable to recover it. 00:26:54.419 [2024-07-13 06:22:00.621090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.419 [2024-07-13 06:22:00.621243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.419 [2024-07-13 06:22:00.621272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.419 qpair failed and we were unable to recover it. 
00:26:54.419 [2024-07-13 06:22:00.621451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.419 [2024-07-13 06:22:00.621568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.419 [2024-07-13 06:22:00.621594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.419 qpair failed and we were unable to recover it. 00:26:54.419 [2024-07-13 06:22:00.621748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.419 [2024-07-13 06:22:00.621862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.419 [2024-07-13 06:22:00.621925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.419 qpair failed and we were unable to recover it. 00:26:54.419 [2024-07-13 06:22:00.622094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.419 [2024-07-13 06:22:00.622252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.419 [2024-07-13 06:22:00.622281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.419 qpair failed and we were unable to recover it. 00:26:54.419 [2024-07-13 06:22:00.622466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.419 [2024-07-13 06:22:00.622633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.419 [2024-07-13 06:22:00.622662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.419 qpair failed and we were unable to recover it. 00:26:54.419 [2024-07-13 06:22:00.622833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.419 [2024-07-13 06:22:00.623020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.623048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.420 qpair failed and we were unable to recover it. 00:26:54.420 [2024-07-13 06:22:00.623196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.623340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.623365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.420 qpair failed and we were unable to recover it. 00:26:54.420 [2024-07-13 06:22:00.623540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.623656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.623681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.420 qpair failed and we were unable to recover it. 
00:26:54.420 [2024-07-13 06:22:00.623877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.623994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.624019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.420 qpair failed and we were unable to recover it. 00:26:54.420 [2024-07-13 06:22:00.624205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.624345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.624370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.420 qpair failed and we were unable to recover it. 00:26:54.420 [2024-07-13 06:22:00.624487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.624658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.624684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.420 qpair failed and we were unable to recover it. 00:26:54.420 [2024-07-13 06:22:00.624852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.625011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.625036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.420 qpair failed and we were unable to recover it. 00:26:54.420 [2024-07-13 06:22:00.625186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.625328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.625353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.420 qpair failed and we were unable to recover it. 00:26:54.420 [2024-07-13 06:22:00.625472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.625613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.625638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.420 qpair failed and we were unable to recover it. 00:26:54.420 [2024-07-13 06:22:00.625749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.625896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.625924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.420 qpair failed and we were unable to recover it. 
00:26:54.420 [2024-07-13 06:22:00.626101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.626244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.626270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.420 qpair failed and we were unable to recover it. 00:26:54.420 [2024-07-13 06:22:00.626420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.626557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.626583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.420 qpair failed and we were unable to recover it. 00:26:54.420 [2024-07-13 06:22:00.626721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.626872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.626899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.420 qpair failed and we were unable to recover it. 00:26:54.420 [2024-07-13 06:22:00.627075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.627325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.627385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.420 qpair failed and we were unable to recover it. 00:26:54.420 [2024-07-13 06:22:00.627569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.627706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.627732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.420 qpair failed and we were unable to recover it. 00:26:54.420 [2024-07-13 06:22:00.627919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.628091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.628117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.420 qpair failed and we were unable to recover it. 00:26:54.420 [2024-07-13 06:22:00.628241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.628360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.628385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.420 qpair failed and we were unable to recover it. 
00:26:54.420 [2024-07-13 06:22:00.628533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.628677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.628703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.420 qpair failed and we were unable to recover it. 00:26:54.420 [2024-07-13 06:22:00.628876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.629021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.629047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.420 qpair failed and we were unable to recover it. 00:26:54.420 [2024-07-13 06:22:00.629173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.629281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.629306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.420 qpair failed and we were unable to recover it. 00:26:54.420 [2024-07-13 06:22:00.629484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.629627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.629652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.420 qpair failed and we were unable to recover it. 00:26:54.420 [2024-07-13 06:22:00.629798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.629943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.629970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.420 qpair failed and we were unable to recover it. 00:26:54.420 [2024-07-13 06:22:00.630082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.630202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.630228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.420 qpair failed and we were unable to recover it. 00:26:54.420 [2024-07-13 06:22:00.630351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.630498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.630523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.420 qpair failed and we were unable to recover it. 
00:26:54.420 [2024-07-13 06:22:00.630661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.630837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.630863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.420 qpair failed and we were unable to recover it. 00:26:54.420 [2024-07-13 06:22:00.630985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.631141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.631167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.420 qpair failed and we were unable to recover it. 00:26:54.420 [2024-07-13 06:22:00.631340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.631529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.631558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.420 qpair failed and we were unable to recover it. 00:26:54.420 [2024-07-13 06:22:00.631744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.631882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.631908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.420 qpair failed and we were unable to recover it. 00:26:54.420 [2024-07-13 06:22:00.632046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.632192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.632220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.420 qpair failed and we were unable to recover it. 00:26:54.420 [2024-07-13 06:22:00.632373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.632493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.632518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.420 qpair failed and we were unable to recover it. 00:26:54.420 [2024-07-13 06:22:00.632635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.632807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.420 [2024-07-13 06:22:00.632833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.420 qpair failed and we were unable to recover it. 
00:26:54.421 [2024-07-13 06:22:00.632958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.633073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.633098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.421 qpair failed and we were unable to recover it. 00:26:54.421 [2024-07-13 06:22:00.633247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.633422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.633448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.421 qpair failed and we were unable to recover it. 00:26:54.421 [2024-07-13 06:22:00.633601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.633752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.633779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.421 qpair failed and we were unable to recover it. 00:26:54.421 [2024-07-13 06:22:00.633904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.634022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.634048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.421 qpair failed and we were unable to recover it. 00:26:54.421 [2024-07-13 06:22:00.634198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.634346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.634373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.421 qpair failed and we were unable to recover it. 00:26:54.421 [2024-07-13 06:22:00.634514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.634656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.634681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.421 qpair failed and we were unable to recover it. 00:26:54.421 [2024-07-13 06:22:00.634854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.635012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.635038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.421 qpair failed and we were unable to recover it. 
00:26:54.421 [2024-07-13 06:22:00.635224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.635413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.635474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.421 qpair failed and we were unable to recover it. 00:26:54.421 [2024-07-13 06:22:00.635636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.635816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.635842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.421 qpair failed and we were unable to recover it. 00:26:54.421 [2024-07-13 06:22:00.636001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.636150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.636176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.421 qpair failed and we were unable to recover it. 00:26:54.421 [2024-07-13 06:22:00.636324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.636440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.636465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.421 qpair failed and we were unable to recover it. 00:26:54.421 [2024-07-13 06:22:00.636581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.636746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.636772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.421 qpair failed and we were unable to recover it. 00:26:54.421 [2024-07-13 06:22:00.636919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.637096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.637126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.421 qpair failed and we were unable to recover it. 00:26:54.421 [2024-07-13 06:22:00.637274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.637446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.637472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.421 qpair failed and we were unable to recover it. 
00:26:54.421 [2024-07-13 06:22:00.637642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.637788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.637815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.421 qpair failed and we were unable to recover it. 00:26:54.421 [2024-07-13 06:22:00.637963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.638109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.638135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.421 qpair failed and we were unable to recover it. 00:26:54.421 [2024-07-13 06:22:00.638281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.638422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.638447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.421 qpair failed and we were unable to recover it. 00:26:54.421 [2024-07-13 06:22:00.638561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.638680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.638705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.421 qpair failed and we were unable to recover it. 00:26:54.421 [2024-07-13 06:22:00.638847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.638963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.638989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.421 qpair failed and we were unable to recover it. 00:26:54.421 [2024-07-13 06:22:00.639138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.639312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.639338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.421 qpair failed and we were unable to recover it. 00:26:54.421 [2024-07-13 06:22:00.639479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.639654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.639679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.421 qpair failed and we were unable to recover it. 
00:26:54.421 [2024-07-13 06:22:00.639819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.639968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.639995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.421 qpair failed and we were unable to recover it. 00:26:54.421 [2024-07-13 06:22:00.640165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.640282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.640308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.421 qpair failed and we were unable to recover it. 00:26:54.421 [2024-07-13 06:22:00.640494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.640651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.640681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.421 qpair failed and we were unable to recover it. 00:26:54.421 [2024-07-13 06:22:00.640829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.640993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.641019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.421 qpair failed and we were unable to recover it. 00:26:54.421 [2024-07-13 06:22:00.641159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.641272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.641297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.421 qpair failed and we were unable to recover it. 00:26:54.421 [2024-07-13 06:22:00.641537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.641696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.641726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.421 qpair failed and we were unable to recover it. 00:26:54.421 [2024-07-13 06:22:00.641853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.642056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.642085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.421 qpair failed and we were unable to recover it. 
00:26:54.421 [2024-07-13 06:22:00.642275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.642388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.642414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.421 qpair failed and we were unable to recover it. 00:26:54.421 [2024-07-13 06:22:00.642584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.642699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.642725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.421 qpair failed and we were unable to recover it. 00:26:54.421 [2024-07-13 06:22:00.642899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.421 [2024-07-13 06:22:00.643041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.643067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-07-13 06:22:00.643242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.643353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.643379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-07-13 06:22:00.643529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.643668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.643693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-07-13 06:22:00.643811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.643984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.644010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-07-13 06:22:00.644128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.644277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.644304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 
00:26:54.422 [2024-07-13 06:22:00.644446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.644596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.644622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-07-13 06:22:00.644742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.644861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.644895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-07-13 06:22:00.645039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.645184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.645210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-07-13 06:22:00.645357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.645508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.645534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-07-13 06:22:00.645683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.645829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.645856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-07-13 06:22:00.646019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.646188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.646214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-07-13 06:22:00.646381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.646530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.646556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 
00:26:54.422 [2024-07-13 06:22:00.646756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.646927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.646954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-07-13 06:22:00.647103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.647226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.647251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-07-13 06:22:00.647397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.647514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.647539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-07-13 06:22:00.647681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.647826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.647851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-07-13 06:22:00.648024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.648207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.648236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-07-13 06:22:00.648387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.648591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.648661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-07-13 06:22:00.648873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.649021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.649046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 
00:26:54.422 [2024-07-13 06:22:00.649195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.649338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.649363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-07-13 06:22:00.649538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.649694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.649720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-07-13 06:22:00.649858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.649982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.650007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-07-13 06:22:00.650155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.650291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.650317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-07-13 06:22:00.650460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.650607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.650633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-07-13 06:22:00.650789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.650907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.650934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-07-13 06:22:00.651080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.651205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.651232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 
00:26:54.422 [2024-07-13 06:22:00.651375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.651520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.651546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-07-13 06:22:00.651720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.651861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.651895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-07-13 06:22:00.652017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.652154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.652183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-07-13 06:22:00.652351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.652474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.652500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-07-13 06:22:00.652643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.652790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.422 [2024-07-13 06:22:00.652816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.422 qpair failed and we were unable to recover it. 00:26:54.422 [2024-07-13 06:22:00.652959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.653106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.653131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-07-13 06:22:00.653304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.653444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.653469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 
00:26:54.423 [2024-07-13 06:22:00.653641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.653808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.653837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-07-13 06:22:00.654000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.654127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.654152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-07-13 06:22:00.654298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.654418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.654444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-07-13 06:22:00.654612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.654765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.654793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-07-13 06:22:00.654939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.655101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.655127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-07-13 06:22:00.655295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.655437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.655463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-07-13 06:22:00.655572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.655714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.655740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 
00:26:54.423 [2024-07-13 06:22:00.655919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.656068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.656093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-07-13 06:22:00.656284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.656425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.656451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-07-13 06:22:00.656594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.656734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.656759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-07-13 06:22:00.656911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.657057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.657087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-07-13 06:22:00.657236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.657376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.657417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-07-13 06:22:00.657551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.657673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.657701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-07-13 06:22:00.657902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.658045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.658073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 
00:26:54.423 [2024-07-13 06:22:00.658223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.658400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.658426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-07-13 06:22:00.658564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.658710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.658747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-07-13 06:22:00.658914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.659027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.659053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-07-13 06:22:00.659200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.659349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.659377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-07-13 06:22:00.659553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.659670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.659696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-07-13 06:22:00.659875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.659997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.660023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 00:26:54.423 [2024-07-13 06:22:00.660161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.660283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.423 [2024-07-13 06:22:00.660309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.423 qpair failed and we were unable to recover it. 
00:26:54.423 - 00:26:54.428 [... the same error group repeats continuously from 2024-07-13 06:22:00.660485 through 06:22:00.707080: posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 (twice per attempt), then nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." ...]
00:26:54.428 [2024-07-13 06:22:00.707198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-07-13 06:22:00.707344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-07-13 06:22:00.707369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.428 qpair failed and we were unable to recover it. 00:26:54.428 [2024-07-13 06:22:00.707557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-07-13 06:22:00.707746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-07-13 06:22:00.707774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.428 qpair failed and we were unable to recover it. 00:26:54.428 [2024-07-13 06:22:00.707903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-07-13 06:22:00.708083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-07-13 06:22:00.708113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.428 qpair failed and we were unable to recover it. 00:26:54.428 [2024-07-13 06:22:00.708237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-07-13 06:22:00.708347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-07-13 06:22:00.708374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.428 qpair failed and we were unable to recover it. 00:26:54.428 [2024-07-13 06:22:00.708547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-07-13 06:22:00.708664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-07-13 06:22:00.708689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.428 qpair failed and we were unable to recover it. 00:26:54.428 [2024-07-13 06:22:00.708841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-07-13 06:22:00.708970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-07-13 06:22:00.708997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.428 qpair failed and we were unable to recover it. 00:26:54.428 [2024-07-13 06:22:00.709144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-07-13 06:22:00.709294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-07-13 06:22:00.709319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.428 qpair failed and we were unable to recover it. 
00:26:54.428 [2024-07-13 06:22:00.709463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-07-13 06:22:00.709627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-07-13 06:22:00.709655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.428 qpair failed and we were unable to recover it. 00:26:54.428 [2024-07-13 06:22:00.709859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-07-13 06:22:00.710006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-07-13 06:22:00.710032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.428 qpair failed and we were unable to recover it. 00:26:54.428 [2024-07-13 06:22:00.710150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-07-13 06:22:00.710258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-07-13 06:22:00.710283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.428 qpair failed and we were unable to recover it. 00:26:54.428 [2024-07-13 06:22:00.710456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-07-13 06:22:00.710597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-07-13 06:22:00.710622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.428 qpair failed and we were unable to recover it. 00:26:54.428 [2024-07-13 06:22:00.710811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.428 [2024-07-13 06:22:00.710950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.710977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.429 qpair failed and we were unable to recover it. 00:26:54.429 [2024-07-13 06:22:00.711092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.711239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.711268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.429 qpair failed and we were unable to recover it. 00:26:54.429 [2024-07-13 06:22:00.711424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.711582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.711607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.429 qpair failed and we were unable to recover it. 
00:26:54.429 [2024-07-13 06:22:00.711750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.711887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.711915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.429 qpair failed and we were unable to recover it. 00:26:54.429 [2024-07-13 06:22:00.712055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.712237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.712265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.429 qpair failed and we were unable to recover it. 00:26:54.429 [2024-07-13 06:22:00.712435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.712577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.712602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.429 qpair failed and we were unable to recover it. 00:26:54.429 [2024-07-13 06:22:00.712728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.712895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.712924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.429 qpair failed and we were unable to recover it. 00:26:54.429 [2024-07-13 06:22:00.713088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.713297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.713347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.429 qpair failed and we were unable to recover it. 00:26:54.429 [2024-07-13 06:22:00.713501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.713664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.713690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.429 qpair failed and we were unable to recover it. 00:26:54.429 [2024-07-13 06:22:00.713832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.713959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.713985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.429 qpair failed and we were unable to recover it. 
00:26:54.429 [2024-07-13 06:22:00.714154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.714277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.714305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.429 qpair failed and we were unable to recover it. 00:26:54.429 [2024-07-13 06:22:00.714498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.714643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.714685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.429 qpair failed and we were unable to recover it. 00:26:54.429 [2024-07-13 06:22:00.714853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.715048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.715076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.429 qpair failed and we were unable to recover it. 00:26:54.429 [2024-07-13 06:22:00.715270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.715458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.715486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.429 qpair failed and we were unable to recover it. 00:26:54.429 [2024-07-13 06:22:00.715614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.715775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.715803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.429 qpair failed and we were unable to recover it. 00:26:54.429 [2024-07-13 06:22:00.715943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.716110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.716138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.429 qpair failed and we were unable to recover it. 00:26:54.429 [2024-07-13 06:22:00.716297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.716445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.716473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.429 qpair failed and we were unable to recover it. 
00:26:54.429 [2024-07-13 06:22:00.716624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.716746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.716773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.429 qpair failed and we were unable to recover it. 00:26:54.429 [2024-07-13 06:22:00.716945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.717121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.717146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.429 qpair failed and we were unable to recover it. 00:26:54.429 [2024-07-13 06:22:00.717302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.717490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.717516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.429 qpair failed and we were unable to recover it. 00:26:54.429 [2024-07-13 06:22:00.717660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.717823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.717851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.429 qpair failed and we were unable to recover it. 00:26:54.429 [2024-07-13 06:22:00.718007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.718157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.718182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.429 qpair failed and we were unable to recover it. 00:26:54.429 [2024-07-13 06:22:00.718371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.718676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.718727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.429 qpair failed and we were unable to recover it. 00:26:54.429 [2024-07-13 06:22:00.718907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.719051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.719076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.429 qpair failed and we were unable to recover it. 
00:26:54.429 [2024-07-13 06:22:00.719236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.719396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.719422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.429 qpair failed and we were unable to recover it. 00:26:54.429 [2024-07-13 06:22:00.719569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.719693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.719735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.429 qpair failed and we were unable to recover it. 00:26:54.429 [2024-07-13 06:22:00.719885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.720058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.720084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.429 qpair failed and we were unable to recover it. 00:26:54.429 [2024-07-13 06:22:00.720340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.720533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.720561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.429 qpair failed and we were unable to recover it. 00:26:54.429 [2024-07-13 06:22:00.720724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.720921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.720947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.429 qpair failed and we were unable to recover it. 00:26:54.429 [2024-07-13 06:22:00.721092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.721235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.721277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.429 qpair failed and we were unable to recover it. 00:26:54.429 [2024-07-13 06:22:00.721433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.721566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.429 [2024-07-13 06:22:00.721596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.429 qpair failed and we were unable to recover it. 
00:26:54.429 [2024-07-13 06:22:00.721759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.721927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.721953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.430 qpair failed and we were unable to recover it. 00:26:54.430 [2024-07-13 06:22:00.722105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.722261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.722289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.430 qpair failed and we were unable to recover it. 00:26:54.430 [2024-07-13 06:22:00.722443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.722612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.722653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.430 qpair failed and we were unable to recover it. 00:26:54.430 [2024-07-13 06:22:00.722801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.722947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.722973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.430 qpair failed and we were unable to recover it. 00:26:54.430 [2024-07-13 06:22:00.723114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.723285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.723326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.430 qpair failed and we were unable to recover it. 00:26:54.430 [2024-07-13 06:22:00.723495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.723639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.723665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.430 qpair failed and we were unable to recover it. 00:26:54.430 [2024-07-13 06:22:00.723809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.723921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.723948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.430 qpair failed and we were unable to recover it. 
00:26:54.430 [2024-07-13 06:22:00.724092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.724254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.724281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.430 qpair failed and we were unable to recover it. 00:26:54.430 [2024-07-13 06:22:00.724491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.724614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.724643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.430 qpair failed and we were unable to recover it. 00:26:54.430 [2024-07-13 06:22:00.724811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.724991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.725020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.430 qpair failed and we were unable to recover it. 00:26:54.430 [2024-07-13 06:22:00.725191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.725333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.725359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.430 qpair failed and we were unable to recover it. 00:26:54.430 [2024-07-13 06:22:00.725608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.725803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.725828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.430 qpair failed and we were unable to recover it. 00:26:54.430 [2024-07-13 06:22:00.725980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.726131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.726156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.430 qpair failed and we were unable to recover it. 00:26:54.430 [2024-07-13 06:22:00.726303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.726448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.726476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.430 qpair failed and we were unable to recover it. 
00:26:54.430 [2024-07-13 06:22:00.726639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.726810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.726835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.430 qpair failed and we were unable to recover it. 00:26:54.430 [2024-07-13 06:22:00.726992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.727140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.727187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.430 qpair failed and we were unable to recover it. 00:26:54.430 [2024-07-13 06:22:00.727459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.727631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.727672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.430 qpair failed and we were unable to recover it. 00:26:54.430 [2024-07-13 06:22:00.727877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.728023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.728048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.430 qpair failed and we were unable to recover it. 00:26:54.430 [2024-07-13 06:22:00.728196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.728317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.728342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.430 qpair failed and we were unable to recover it. 00:26:54.430 [2024-07-13 06:22:00.728452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.728593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.728619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.430 qpair failed and we were unable to recover it. 00:26:54.430 [2024-07-13 06:22:00.728847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.729064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.729090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.430 qpair failed and we were unable to recover it. 
00:26:54.430 [2024-07-13 06:22:00.729278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.729398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.729428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.430 qpair failed and we were unable to recover it. 00:26:54.430 [2024-07-13 06:22:00.729579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.729696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.729721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.430 qpair failed and we were unable to recover it. 00:26:54.430 [2024-07-13 06:22:00.729916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.730072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.730101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.430 qpair failed and we were unable to recover it. 00:26:54.430 [2024-07-13 06:22:00.730232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.730357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.730385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.430 qpair failed and we were unable to recover it. 00:26:54.430 [2024-07-13 06:22:00.730544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.730681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.730709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.430 qpair failed and we were unable to recover it. 00:26:54.430 [2024-07-13 06:22:00.730937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.731128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.731156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.430 qpair failed and we were unable to recover it. 00:26:54.430 [2024-07-13 06:22:00.731307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.731538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.731565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.430 qpair failed and we were unable to recover it. 
00:26:54.430 [2024-07-13 06:22:00.731752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.731909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.731937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.430 qpair failed and we were unable to recover it. 00:26:54.430 [2024-07-13 06:22:00.732125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.732398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.732444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.430 qpair failed and we were unable to recover it. 00:26:54.430 [2024-07-13 06:22:00.732620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.732768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.430 [2024-07-13 06:22:00.732811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-07-13 06:22:00.732957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.733069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.733095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-07-13 06:22:00.733291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.733479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.733525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-07-13 06:22:00.733709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.733842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.733878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-07-13 06:22:00.734072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.734221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.734247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 
00:26:54.431 [2024-07-13 06:22:00.734418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.734564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.734590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-07-13 06:22:00.734841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.735035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.735063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-07-13 06:22:00.735250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.735493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.735552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-07-13 06:22:00.735743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.735895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.735922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-07-13 06:22:00.736063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.736212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.736241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-07-13 06:22:00.736508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.736718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.736746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-07-13 06:22:00.736906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.737093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.737121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 
00:26:54.431 [2024-07-13 06:22:00.737292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.737434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.737474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-07-13 06:22:00.737707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.737874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.737903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-07-13 06:22:00.738091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.738317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.738345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-07-13 06:22:00.738495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.738748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.738800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-07-13 06:22:00.739005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.739192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.739221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-07-13 06:22:00.739359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.739546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.739575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-07-13 06:22:00.739772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.739944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.739986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 
00:26:54.431 [2024-07-13 06:22:00.740146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.740309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.740337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-07-13 06:22:00.740476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.740619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.740645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-07-13 06:22:00.740822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.741013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.741042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-07-13 06:22:00.741234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.741451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.741497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-07-13 06:22:00.741657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.741818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.741846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-07-13 06:22:00.742001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.742150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.742176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-07-13 06:22:00.742377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.742565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.742593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 
00:26:54.431 [2024-07-13 06:22:00.742752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.742877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.742906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-07-13 06:22:00.743029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.743224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.743253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-07-13 06:22:00.743419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.743559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.743600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-07-13 06:22:00.743786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.743958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.743988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-07-13 06:22:00.744224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.744478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.744504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.431 qpair failed and we were unable to recover it. 00:26:54.431 [2024-07-13 06:22:00.744666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.431 [2024-07-13 06:22:00.744855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-07-13 06:22:00.744899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 00:26:54.432 [2024-07-13 06:22:00.745071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-07-13 06:22:00.745233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.432 [2024-07-13 06:22:00.745259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.432 qpair failed and we were unable to recover it. 
00:26:54.432 [2024-07-13 06:22:00.745380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.432 [2024-07-13 06:22:00.745586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.432 [2024-07-13 06:22:00.745612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420
00:26:54.432 qpair failed and we were unable to recover it.
[the same three-line failure repeats for every subsequent connection attempt between 06:22:00.745755 and 06:22:00.801570: two posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 entries, then nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it."]
00:26:54.437 [2024-07-13 06:22:00.801693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.437 [2024-07-13 06:22:00.801884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.437 [2024-07-13 06:22:00.801914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420
00:26:54.437 qpair failed and we were unable to recover it.
00:26:54.437 [2024-07-13 06:22:00.802045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.802202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.802231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-07-13 06:22:00.802372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.802541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.802566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-07-13 06:22:00.802713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.802910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.802940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-07-13 06:22:00.803101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.803245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.803271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-07-13 06:22:00.803437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.803624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.803653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-07-13 06:22:00.803793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.803937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.803964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-07-13 06:22:00.804136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.804263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.804352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 
00:26:54.437 [2024-07-13 06:22:00.804606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.804791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.804819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-07-13 06:22:00.804969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.805160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.805189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-07-13 06:22:00.805359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.805503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.805528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-07-13 06:22:00.805691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.805852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.805889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-07-13 06:22:00.806055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.806244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.806270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-07-13 06:22:00.806454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.806642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.806671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-07-13 06:22:00.806826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.806980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.807007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 
00:26:54.437 [2024-07-13 06:22:00.807207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.807371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.807399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-07-13 06:22:00.807600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.807725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.807750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-07-13 06:22:00.807895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.808053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.808082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-07-13 06:22:00.808246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.808420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.808462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-07-13 06:22:00.808621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.808751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.808784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-07-13 06:22:00.808981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.809120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.809148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.437 qpair failed and we were unable to recover it. 00:26:54.437 [2024-07-13 06:22:00.809339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.437 [2024-07-13 06:22:00.809510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.809536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 
00:26:54.438 [2024-07-13 06:22:00.809679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.809796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.809822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-07-13 06:22:00.809981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.810129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.810154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-07-13 06:22:00.810323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.810480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.810528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-07-13 06:22:00.810666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.810855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.810886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-07-13 06:22:00.811033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.811179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.811221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-07-13 06:22:00.811414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.811544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.811573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-07-13 06:22:00.811761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.811958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.811985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 
00:26:54.438 [2024-07-13 06:22:00.812145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.812271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.812300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-07-13 06:22:00.812443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.812614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.812655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-07-13 06:22:00.812787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.812971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.813001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-07-13 06:22:00.813140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.813327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.813354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-07-13 06:22:00.813516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.813672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.813700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-07-13 06:22:00.813872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.813994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.814020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-07-13 06:22:00.814197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.814335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.814360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 
00:26:54.438 [2024-07-13 06:22:00.814585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.814766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.814794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-07-13 06:22:00.814984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.815145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.815174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-07-13 06:22:00.815352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.815490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.815531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-07-13 06:22:00.815699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.815839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.815873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-07-13 06:22:00.816040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.816256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.816309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-07-13 06:22:00.816449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.816608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.816636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-07-13 06:22:00.816826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.817022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.817051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 
00:26:54.438 [2024-07-13 06:22:00.817179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.817342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.817370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-07-13 06:22:00.817568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.817763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.817789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-07-13 06:22:00.817936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.818087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.818112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-07-13 06:22:00.818291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.818407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.818432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-07-13 06:22:00.818560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.818733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.818762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-07-13 06:22:00.818929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.819047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.819073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-07-13 06:22:00.819199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.819336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.819361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 
00:26:54.438 [2024-07-13 06:22:00.819512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.819699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.819728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-07-13 06:22:00.819885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.820020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.820049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.438 qpair failed and we were unable to recover it. 00:26:54.438 [2024-07-13 06:22:00.820187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.438 [2024-07-13 06:22:00.820427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.820477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-07-13 06:22:00.820636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.820796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.820824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-07-13 06:22:00.821009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.821153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.821197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-07-13 06:22:00.821384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.821534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.821562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-07-13 06:22:00.821717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.821902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.821932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 
00:26:54.439 [2024-07-13 06:22:00.822120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.822248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.822277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-07-13 06:22:00.822447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.822588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.822613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-07-13 06:22:00.822763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.822935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.822962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-07-13 06:22:00.823107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.823313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.823342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-07-13 06:22:00.823478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.823642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.823670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-07-13 06:22:00.823813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.823961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.823987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-07-13 06:22:00.824160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.824348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.824403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 
00:26:54.439 [2024-07-13 06:22:00.824653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.824832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.824860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-07-13 06:22:00.825030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.825267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.825328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-07-13 06:22:00.825490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.825659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.825749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-07-13 06:22:00.825935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.826099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.826128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-07-13 06:22:00.826279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.826535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.826590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-07-13 06:22:00.826727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.826885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.826915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-07-13 06:22:00.827100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.827251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.827281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 
00:26:54.439 [2024-07-13 06:22:00.827427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.827595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.827623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-07-13 06:22:00.827813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.827993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.828022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-07-13 06:22:00.828214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.828433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.828486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-07-13 06:22:00.828678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.828889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.828919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-07-13 06:22:00.829089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.829256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.829282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-07-13 06:22:00.829472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.829809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.829872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-07-13 06:22:00.830068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.830231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.830259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 
00:26:54.439 [2024-07-13 06:22:00.830396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.830539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.830581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-07-13 06:22:00.830742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.830926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.830956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-07-13 06:22:00.831115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.831422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.831480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-07-13 06:22:00.831672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.831830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.831858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-07-13 06:22:00.832054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.832169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.439 [2024-07-13 06:22:00.832195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.439 qpair failed and we were unable to recover it. 00:26:54.439 [2024-07-13 06:22:00.832369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.832551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.832584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-07-13 06:22:00.832758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.832940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.832969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 
00:26:54.440 [2024-07-13 06:22:00.833127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.833334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.833379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-07-13 06:22:00.833547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.833689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.833714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-07-13 06:22:00.833888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.834047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.834075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-07-13 06:22:00.834348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.834636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.834664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-07-13 06:22:00.834825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.835018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.835047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-07-13 06:22:00.835191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.835302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.835328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-07-13 06:22:00.835503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.835648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.835677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 
00:26:54.440 [2024-07-13 06:22:00.835864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.836025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.836054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-07-13 06:22:00.836244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.836501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.836560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-07-13 06:22:00.836728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.836850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.836892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-07-13 06:22:00.837017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.837202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.837228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-07-13 06:22:00.837378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.837517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.837546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-07-13 06:22:00.837739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.837878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.837905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-07-13 06:22:00.838093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.838234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.838276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 
00:26:54.440 [2024-07-13 06:22:00.838435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.838626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.838652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-07-13 06:22:00.838776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.838989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.839015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-07-13 06:22:00.839183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.839360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.839407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-07-13 06:22:00.839601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.839792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.839820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-07-13 06:22:00.839995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.840146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.840188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-07-13 06:22:00.840354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.840501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.840527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-07-13 06:22:00.840665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.840807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.840833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 
00:26:54.440 [2024-07-13 06:22:00.841049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.841219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.841260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-07-13 06:22:00.841437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.841552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.841582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-07-13 06:22:00.841772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.841916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.841943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-07-13 06:22:00.842126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.842247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.842275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-07-13 06:22:00.842425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.842574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.842616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-07-13 06:22:00.842770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.842941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.842970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.440 [2024-07-13 06:22:00.843131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.843436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.843495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 
00:26:54.440 [2024-07-13 06:22:00.843676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.843822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.440 [2024-07-13 06:22:00.843849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.440 qpair failed and we were unable to recover it. 00:26:54.441 [2024-07-13 06:22:00.844086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.844259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.844310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-07-13 06:22:00.844498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.844657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.844685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-07-13 06:22:00.844890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.845047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.845075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-07-13 06:22:00.845239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.845367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.845395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-07-13 06:22:00.845550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.845696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.845723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-07-13 06:22:00.845876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.846051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.846077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 
00:26:54.441 [2024-07-13 06:22:00.846188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.846328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.846354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-07-13 06:22:00.846499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.846672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.846701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-07-13 06:22:00.846826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.846954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.846980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-07-13 06:22:00.847151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.847362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.847413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-07-13 06:22:00.847660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.847854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.847888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-07-13 06:22:00.848012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.848181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.848207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-07-13 06:22:00.848345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.848530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.848555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 
00:26:54.441 [2024-07-13 06:22:00.848671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.848813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.848838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-07-13 06:22:00.849019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.849272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.849330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-07-13 06:22:00.849525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.849686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.849714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-07-13 06:22:00.849903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.850027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.850053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-07-13 06:22:00.850201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.850319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.850349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-07-13 06:22:00.850503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.850612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.850637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-07-13 06:22:00.850753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.850902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.850929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 
00:26:54.441 [2024-07-13 06:22:00.851074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.851195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.851221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-07-13 06:22:00.851338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.851486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.851512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-07-13 06:22:00.851686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.851823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.851849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-07-13 06:22:00.852038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.852155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.852181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-07-13 06:22:00.852353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.852497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.852523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-07-13 06:22:00.852644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.852762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.441 [2024-07-13 06:22:00.852787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.441 qpair failed and we were unable to recover it. 00:26:54.441 [2024-07-13 06:22:00.852933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.853078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.853104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 
00:26:54.442 [2024-07-13 06:22:00.853250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.853392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.853417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 00:26:54.442 [2024-07-13 06:22:00.853542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.853677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.853703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 00:26:54.442 [2024-07-13 06:22:00.853845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.854004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.854030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 00:26:54.442 [2024-07-13 06:22:00.854146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.854318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.854343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 00:26:54.442 [2024-07-13 06:22:00.854484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.854623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.854648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 00:26:54.442 [2024-07-13 06:22:00.854806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.854947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.854974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 00:26:54.442 [2024-07-13 06:22:00.855117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.855235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.855260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 
00:26:54.442 [2024-07-13 06:22:00.855430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.855600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.855626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 00:26:54.442 [2024-07-13 06:22:00.855821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.856001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.856027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 00:26:54.442 [2024-07-13 06:22:00.856171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.856283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.856309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 00:26:54.442 [2024-07-13 06:22:00.856450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.856626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.856652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 00:26:54.442 [2024-07-13 06:22:00.856809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.856930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.856956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 00:26:54.442 [2024-07-13 06:22:00.857101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.857244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.857270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 00:26:54.442 [2024-07-13 06:22:00.857443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.857629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.857658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 
00:26:54.442 [2024-07-13 06:22:00.857816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.857935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.857965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 00:26:54.442 [2024-07-13 06:22:00.858138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.858290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.858315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 00:26:54.442 [2024-07-13 06:22:00.858464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.858604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.858630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 00:26:54.442 [2024-07-13 06:22:00.858802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.858950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.858977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 00:26:54.442 [2024-07-13 06:22:00.859171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.859313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.859339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 00:26:54.442 [2024-07-13 06:22:00.859486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.859636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.859662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 00:26:54.442 [2024-07-13 06:22:00.859806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.859952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.859978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 
00:26:54.442 [2024-07-13 06:22:00.860120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.860239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.860264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 00:26:54.442 [2024-07-13 06:22:00.860413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.860529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.860555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 00:26:54.442 [2024-07-13 06:22:00.860678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.860825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.860850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 00:26:54.442 [2024-07-13 06:22:00.861009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.861168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.861196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 00:26:54.442 [2024-07-13 06:22:00.861342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.861489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.861514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 00:26:54.442 [2024-07-13 06:22:00.861654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.861804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.861830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 00:26:54.442 [2024-07-13 06:22:00.861973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.862086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.862112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 
00:26:54.442 [2024-07-13 06:22:00.862258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.862403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.862429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 00:26:54.442 [2024-07-13 06:22:00.862615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.862753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.442 [2024-07-13 06:22:00.862779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.442 qpair failed and we were unable to recover it. 00:26:54.443 [2024-07-13 06:22:00.862922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.863062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.863088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.443 qpair failed and we were unable to recover it. 00:26:54.443 [2024-07-13 06:22:00.863266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.863474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.863523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.443 qpair failed and we were unable to recover it. 00:26:54.443 [2024-07-13 06:22:00.863698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.863806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.863832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.443 qpair failed and we were unable to recover it. 00:26:54.443 [2024-07-13 06:22:00.863952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.864094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.864119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.443 qpair failed and we were unable to recover it. 00:26:54.443 [2024-07-13 06:22:00.864291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.864458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.864483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.443 qpair failed and we were unable to recover it. 
00:26:54.443 [2024-07-13 06:22:00.864622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.864792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.864818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.443 qpair failed and we were unable to recover it. 00:26:54.443 [2024-07-13 06:22:00.864995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.865143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.865168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.443 qpair failed and we were unable to recover it. 00:26:54.443 [2024-07-13 06:22:00.865312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.865422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.865448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.443 qpair failed and we were unable to recover it. 00:26:54.443 [2024-07-13 06:22:00.865598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.865744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.865769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.443 qpair failed and we were unable to recover it. 00:26:54.443 [2024-07-13 06:22:00.865912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.866025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.866051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.443 qpair failed and we were unable to recover it. 00:26:54.443 [2024-07-13 06:22:00.866221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.866367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.866392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.443 qpair failed and we were unable to recover it. 00:26:54.443 [2024-07-13 06:22:00.866527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.866664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.866694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.443 qpair failed and we were unable to recover it. 
00:26:54.443 [2024-07-13 06:22:00.866803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.866950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.866976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.443 qpair failed and we were unable to recover it. 00:26:54.443 [2024-07-13 06:22:00.867127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.867240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.867266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.443 qpair failed and we were unable to recover it. 00:26:54.443 [2024-07-13 06:22:00.867385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.867524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.867550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.443 qpair failed and we were unable to recover it. 00:26:54.443 [2024-07-13 06:22:00.867723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.867893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.867920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.443 qpair failed and we were unable to recover it. 00:26:54.443 [2024-07-13 06:22:00.868091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.868205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.868230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.443 qpair failed and we were unable to recover it. 00:26:54.443 [2024-07-13 06:22:00.868409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.868585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.868610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.443 qpair failed and we were unable to recover it. 00:26:54.443 [2024-07-13 06:22:00.868747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.868888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.868914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.443 qpair failed and we were unable to recover it. 
00:26:54.443 [2024-07-13 06:22:00.869054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.869201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.869227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.443 qpair failed and we were unable to recover it. 00:26:54.443 [2024-07-13 06:22:00.869365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.869518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.869544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.443 qpair failed and we were unable to recover it. 00:26:54.443 [2024-07-13 06:22:00.869697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.869842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.869874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.443 qpair failed and we were unable to recover it. 00:26:54.443 [2024-07-13 06:22:00.870063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.870183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.870208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.443 qpair failed and we were unable to recover it. 00:26:54.443 [2024-07-13 06:22:00.870356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.870503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.870528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.443 qpair failed and we were unable to recover it. 00:26:54.443 [2024-07-13 06:22:00.870666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.870857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.870893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.443 qpair failed and we were unable to recover it. 00:26:54.443 [2024-07-13 06:22:00.871028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.871183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.871211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.443 qpair failed and we were unable to recover it. 
00:26:54.443 [2024-07-13 06:22:00.871371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.871516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.871542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.443 qpair failed and we were unable to recover it. 00:26:54.443 [2024-07-13 06:22:00.871716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.871838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.871873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.443 qpair failed and we were unable to recover it. 00:26:54.443 [2024-07-13 06:22:00.872022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.872138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.872164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.443 qpair failed and we were unable to recover it. 00:26:54.443 [2024-07-13 06:22:00.872305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.872444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.872470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.443 qpair failed and we were unable to recover it. 00:26:54.443 [2024-07-13 06:22:00.872612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.872751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.443 [2024-07-13 06:22:00.872777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.444 qpair failed and we were unable to recover it. 00:26:54.444 [2024-07-13 06:22:00.872892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.873042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.873068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.444 qpair failed and we were unable to recover it. 00:26:54.444 [2024-07-13 06:22:00.873218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.873338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.873364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.444 qpair failed and we were unable to recover it. 
00:26:54.444 [2024-07-13 06:22:00.873486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.873609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.873635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.444 qpair failed and we were unable to recover it. 00:26:54.444 [2024-07-13 06:22:00.873805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.873984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.874011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.444 qpair failed and we were unable to recover it. 00:26:54.444 [2024-07-13 06:22:00.874178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.874294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.874320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.444 qpair failed and we were unable to recover it. 00:26:54.444 [2024-07-13 06:22:00.874468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.874586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.874611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.444 qpair failed and we were unable to recover it. 00:26:54.444 [2024-07-13 06:22:00.874766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.874969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.874999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.444 qpair failed and we were unable to recover it. 00:26:54.444 [2024-07-13 06:22:00.875136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.875293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.875322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.444 qpair failed and we were unable to recover it. 00:26:54.444 [2024-07-13 06:22:00.875484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.875624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.875649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.444 qpair failed and we were unable to recover it. 
00:26:54.444 [2024-07-13 06:22:00.875806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.875956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.875982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.444 qpair failed and we were unable to recover it. 00:26:54.444 [2024-07-13 06:22:00.876130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.876251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.876277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.444 qpair failed and we were unable to recover it. 00:26:54.444 [2024-07-13 06:22:00.876405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.876564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.876590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.444 qpair failed and we were unable to recover it. 00:26:54.444 [2024-07-13 06:22:00.876728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.876847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.876898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.444 qpair failed and we were unable to recover it. 00:26:54.444 [2024-07-13 06:22:00.877068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.877188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.877214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.444 qpair failed and we were unable to recover it. 00:26:54.444 [2024-07-13 06:22:00.877335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.877514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.877540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.444 qpair failed and we were unable to recover it. 00:26:54.444 [2024-07-13 06:22:00.877695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.877839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.877874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.444 qpair failed and we were unable to recover it. 
00:26:54.444 [2024-07-13 06:22:00.878023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.878169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.878195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.444 qpair failed and we were unable to recover it. 00:26:54.444 [2024-07-13 06:22:00.878305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.878437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.878463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.444 qpair failed and we were unable to recover it. 00:26:54.444 [2024-07-13 06:22:00.878584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.878705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.878730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.444 qpair failed and we were unable to recover it. 00:26:54.444 [2024-07-13 06:22:00.878844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.879002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.879029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.444 qpair failed and we were unable to recover it. 00:26:54.444 [2024-07-13 06:22:00.879199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.879370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.879395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.444 qpair failed and we were unable to recover it. 00:26:54.444 [2024-07-13 06:22:00.879535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.879664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.879691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.444 qpair failed and we were unable to recover it. 00:26:54.444 [2024-07-13 06:22:00.879801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.879946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.879972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.444 qpair failed and we were unable to recover it. 
00:26:54.444 [2024-07-13 06:22:00.880094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.880239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.880265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.444 qpair failed and we were unable to recover it. 00:26:54.444 [2024-07-13 06:22:00.880385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.880496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.880522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.444 qpair failed and we were unable to recover it. 00:26:54.444 [2024-07-13 06:22:00.880638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.880790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.880815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.444 qpair failed and we were unable to recover it. 00:26:54.444 [2024-07-13 06:22:00.880990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.881097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.881123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.444 qpair failed and we were unable to recover it. 00:26:54.444 [2024-07-13 06:22:00.881273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.881415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.881441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.444 qpair failed and we were unable to recover it. 00:26:54.444 [2024-07-13 06:22:00.881618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.881788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.881813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.444 qpair failed and we were unable to recover it. 00:26:54.444 [2024-07-13 06:22:00.881956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.882072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.882098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.444 qpair failed and we were unable to recover it. 
00:26:54.444 [2024-07-13 06:22:00.882248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.444 [2024-07-13 06:22:00.882370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.882397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-07-13 06:22:00.882518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.882640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.882670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-07-13 06:22:00.882823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.882992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.883019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-07-13 06:22:00.883145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.883287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.883313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-07-13 06:22:00.883463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.883584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.883610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-07-13 06:22:00.883783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.883896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.883923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-07-13 06:22:00.884064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.884205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.884231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 
00:26:54.445 [2024-07-13 06:22:00.884404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.884512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.884538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-07-13 06:22:00.884680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.884851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.884905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-07-13 06:22:00.885081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.885252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.885278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-07-13 06:22:00.885414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.885552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.885577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-07-13 06:22:00.885715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.885887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.885913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-07-13 06:22:00.886038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.886212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.886238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-07-13 06:22:00.886385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.886582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.886607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 
00:26:54.445 [2024-07-13 06:22:00.886777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.886897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.886924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-07-13 06:22:00.887046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.887194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.887220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-07-13 06:22:00.887365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.887526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.887554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-07-13 06:22:00.887705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.887857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.887895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-07-13 06:22:00.888062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.888211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.888237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-07-13 06:22:00.888390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.888528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.888553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-07-13 06:22:00.888726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.888879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.888905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 
00:26:54.445 [2024-07-13 06:22:00.889057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.889205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.889230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-07-13 06:22:00.889414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.889559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.889586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-07-13 06:22:00.889732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.889876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.889903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-07-13 06:22:00.890073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.890242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.890267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-07-13 06:22:00.890434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.890600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.890629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.445 qpair failed and we were unable to recover it. 00:26:54.445 [2024-07-13 06:22:00.890814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.890948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.445 [2024-07-13 06:22:00.890974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-07-13 06:22:00.891101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.891330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.891381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 
00:26:54.446 [2024-07-13 06:22:00.891661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.891834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.891862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-07-13 06:22:00.892039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.892201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.892229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-07-13 06:22:00.892375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.892544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.892570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-07-13 06:22:00.892739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.892889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.892915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-07-13 06:22:00.893066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.893252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.893280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-07-13 06:22:00.893405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.893581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.893607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-07-13 06:22:00.893801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.893943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.893969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 
00:26:54.446 [2024-07-13 06:22:00.894117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.894265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.894291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-07-13 06:22:00.894472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.894615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.894641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-07-13 06:22:00.894763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.894910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.894937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-07-13 06:22:00.895107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.895228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.895253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-07-13 06:22:00.895431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.895577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.895602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-07-13 06:22:00.895774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.895927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.895953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-07-13 06:22:00.896081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.896202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.896227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 
00:26:54.446 [2024-07-13 06:22:00.896339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.896491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.896516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-07-13 06:22:00.896691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.896816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.896843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-07-13 06:22:00.896995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.897173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.897199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-07-13 06:22:00.897389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.897554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.897606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-07-13 06:22:00.897795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.897949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.897976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-07-13 06:22:00.898102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.898275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.898301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-07-13 06:22:00.898472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.898611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.898637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 
00:26:54.446 [2024-07-13 06:22:00.898783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.898947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.898973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-07-13 06:22:00.899116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.899257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.899282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-07-13 06:22:00.899430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.899568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.899594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-07-13 06:22:00.899714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.899897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.899954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-07-13 06:22:00.900086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.900248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.900276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-07-13 06:22:00.900408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.900574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.900600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-07-13 06:22:00.900726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.900901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.900932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 
00:26:54.446 [2024-07-13 06:22:00.901093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.901262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.901288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.446 qpair failed and we were unable to recover it. 00:26:54.446 [2024-07-13 06:22:00.901453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.446 [2024-07-13 06:22:00.901615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.901643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-07-13 06:22:00.901835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.901960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.901986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-07-13 06:22:00.902128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.902387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.902439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-07-13 06:22:00.902672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.902856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.902893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-07-13 06:22:00.903038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.903223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.903251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-07-13 06:22:00.903412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.903581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.903610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 
00:26:54.447 [2024-07-13 06:22:00.903720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.903966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.903996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-07-13 06:22:00.904147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.904292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.904318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-07-13 06:22:00.904508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.904636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.904665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-07-13 06:22:00.904864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.905035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.905063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-07-13 06:22:00.905215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.905406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.905432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-07-13 06:22:00.905573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.905782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.905808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-07-13 06:22:00.905953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.906098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.906124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 
00:26:54.447 [2024-07-13 06:22:00.906268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.906406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.906448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-07-13 06:22:00.906632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.906786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.906814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-07-13 06:22:00.906980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.907173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.907198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-07-13 06:22:00.907391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.907508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.907536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-07-13 06:22:00.907709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.907850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.907883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-07-13 06:22:00.908038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.908209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.908251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-07-13 06:22:00.908408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.908590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.908619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 
00:26:54.447 [2024-07-13 06:22:00.908807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.908979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.909006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-07-13 06:22:00.909124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.909250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.909275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-07-13 06:22:00.909421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.909589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.909619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-07-13 06:22:00.909776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.909997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.910026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-07-13 06:22:00.910212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.910418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.910479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-07-13 06:22:00.910696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.910890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.910921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-07-13 06:22:00.911117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.911318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.911383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 
00:26:54.447 [2024-07-13 06:22:00.911588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.911709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.911737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-07-13 06:22:00.911908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.912068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.912096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-07-13 06:22:00.912230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.912351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.912389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-07-13 06:22:00.912581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.912743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.447 [2024-07-13 06:22:00.912772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.447 qpair failed and we were unable to recover it. 00:26:54.447 [2024-07-13 06:22:00.912950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.448 [2024-07-13 06:22:00.913110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.448 [2024-07-13 06:22:00.913141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.448 qpair failed and we were unable to recover it. 00:26:54.448 [2024-07-13 06:22:00.913269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.448 [2024-07-13 06:22:00.913396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.448 [2024-07-13 06:22:00.913425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.448 qpair failed and we were unable to recover it. 00:26:54.448 [2024-07-13 06:22:00.913623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.448 [2024-07-13 06:22:00.913771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.448 [2024-07-13 06:22:00.913796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.448 qpair failed and we were unable to recover it. 
00:26:54.448 [2024-07-13 06:22:00.913922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.448 [2024-07-13 06:22:00.914081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.448 [2024-07-13 06:22:00.914122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.448 qpair failed and we were unable to recover it. 00:26:54.448 [2024-07-13 06:22:00.914313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.448 [2024-07-13 06:22:00.914475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.448 [2024-07-13 06:22:00.914504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.448 qpair failed and we were unable to recover it. 00:26:54.722 [2024-07-13 06:22:00.914672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.914823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.914848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.722 qpair failed and we were unable to recover it. 00:26:54.722 [2024-07-13 06:22:00.915044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.915169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.915205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.722 qpair failed and we were unable to recover it. 00:26:54.722 [2024-07-13 06:22:00.915404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.915598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.915633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.722 qpair failed and we were unable to recover it. 00:26:54.722 [2024-07-13 06:22:00.915772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.915978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.916013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.722 qpair failed and we were unable to recover it. 00:26:54.722 [2024-07-13 06:22:00.916166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.916344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.916382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.722 qpair failed and we were unable to recover it. 
00:26:54.722 [2024-07-13 06:22:00.916592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.916745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.916784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.722 qpair failed and we were unable to recover it. 00:26:54.722 [2024-07-13 06:22:00.916956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.917143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.917177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.722 qpair failed and we were unable to recover it. 00:26:54.722 [2024-07-13 06:22:00.917364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.917603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.917661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.722 qpair failed and we were unable to recover it. 00:26:54.722 [2024-07-13 06:22:00.917834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.917980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.918010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.722 qpair failed and we were unable to recover it. 00:26:54.722 [2024-07-13 06:22:00.918211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.918337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.918381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.722 qpair failed and we were unable to recover it. 00:26:54.722 [2024-07-13 06:22:00.918546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.918715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.918744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.722 qpair failed and we were unable to recover it. 00:26:54.722 [2024-07-13 06:22:00.918904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.919039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.919068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.722 qpair failed and we were unable to recover it. 
00:26:54.722 [2024-07-13 06:22:00.919252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.919380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.919409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.722 qpair failed and we were unable to recover it. 00:26:54.722 [2024-07-13 06:22:00.919602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.919770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.919800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.722 qpair failed and we were unable to recover it. 00:26:54.722 [2024-07-13 06:22:00.919961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.920131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.920157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.722 qpair failed and we were unable to recover it. 00:26:54.722 [2024-07-13 06:22:00.920296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.920492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.920520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.722 qpair failed and we were unable to recover it. 00:26:54.722 [2024-07-13 06:22:00.920680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.920878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.920904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.722 qpair failed and we were unable to recover it. 00:26:54.722 [2024-07-13 06:22:00.921053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.921202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.921246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.722 qpair failed and we were unable to recover it. 00:26:54.722 [2024-07-13 06:22:00.921430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.921615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.921644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.722 qpair failed and we were unable to recover it. 
00:26:54.722 [2024-07-13 06:22:00.921830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.921970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.922001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.722 qpair failed and we were unable to recover it. 00:26:54.722 [2024-07-13 06:22:00.922165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.922336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.922366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.722 qpair failed and we were unable to recover it. 00:26:54.722 [2024-07-13 06:22:00.922533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.922652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.722 [2024-07-13 06:22:00.922694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.722 qpair failed and we were unable to recover it. 00:26:54.722 [2024-07-13 06:22:00.922860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.723 [2024-07-13 06:22:00.923038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.723 [2024-07-13 06:22:00.923063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.723 qpair failed and we were unable to recover it. 00:26:54.723 [2024-07-13 06:22:00.923291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.723 [2024-07-13 06:22:00.923429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.723 [2024-07-13 06:22:00.923454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.723 qpair failed and we were unable to recover it. 00:26:54.723 [2024-07-13 06:22:00.923605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.723 [2024-07-13 06:22:00.923733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.723 [2024-07-13 06:22:00.923761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.723 qpair failed and we were unable to recover it. 00:26:54.723 [2024-07-13 06:22:00.923942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.723 [2024-07-13 06:22:00.924089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.723 [2024-07-13 06:22:00.924132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.723 qpair failed and we were unable to recover it. 
00:26:54.723 [2024-07-13 06:22:00.924290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.723 [2024-07-13 06:22:00.924457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.723 [2024-07-13 06:22:00.924483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.723 qpair failed and we were unable to recover it. 00:26:54.723 [2024-07-13 06:22:00.924622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.723 [2024-07-13 06:22:00.924823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.723 [2024-07-13 06:22:00.924851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.723 qpair failed and we were unable to recover it. 00:26:54.723 [2024-07-13 06:22:00.925018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.723 [2024-07-13 06:22:00.925177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.723 [2024-07-13 06:22:00.925208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.723 qpair failed and we were unable to recover it. 00:26:54.723 [2024-07-13 06:22:00.925403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.723 [2024-07-13 06:22:00.925597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.723 [2024-07-13 06:22:00.925643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.723 qpair failed and we were unable to recover it. 00:26:54.723 [2024-07-13 06:22:00.925817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.723 [2024-07-13 06:22:00.925991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.723 [2024-07-13 06:22:00.926034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.723 qpair failed and we were unable to recover it. 00:26:54.723 [2024-07-13 06:22:00.926246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.723 [2024-07-13 06:22:00.926552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.723 [2024-07-13 06:22:00.926602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.723 qpair failed and we were unable to recover it. 00:26:54.723 [2024-07-13 06:22:00.926789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.723 [2024-07-13 06:22:00.926962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.723 [2024-07-13 06:22:00.926988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.723 qpair failed and we were unable to recover it. 
[... the same four-entry failure sequence repeats ~146 more times between 06:22:00.927109 and 06:22:00.981040: two posix.c:1032:posix_sock_create connect() failures with errno = 111, an nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock error for tqpair=0x151d9f0 (addr=10.0.0.2, port=4420), then "qpair failed and we were unable to recover it." ...]
00:26:54.728 [2024-07-13 06:22:00.981306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.728 [2024-07-13 06:22:00.981563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.728 [2024-07-13 06:22:00.981588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420
00:26:54.728 qpair failed and we were unable to recover it.
00:26:54.728 [2024-07-13 06:22:00.981788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.981948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.981978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.728 qpair failed and we were unable to recover it. 00:26:54.728 [2024-07-13 06:22:00.982152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.982296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.982322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.728 qpair failed and we were unable to recover it. 00:26:54.728 [2024-07-13 06:22:00.982494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.982615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.982640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.728 qpair failed and we were unable to recover it. 00:26:54.728 [2024-07-13 06:22:00.982769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.982918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.982948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.728 qpair failed and we were unable to recover it. 00:26:54.728 [2024-07-13 06:22:00.983113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.983238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.983266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.728 qpair failed and we were unable to recover it. 00:26:54.728 [2024-07-13 06:22:00.983433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.983581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.983606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.728 qpair failed and we were unable to recover it. 00:26:54.728 [2024-07-13 06:22:00.983763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.983913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.983939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.728 qpair failed and we were unable to recover it. 
00:26:54.728 [2024-07-13 06:22:00.984060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.984312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.984372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.728 qpair failed and we were unable to recover it. 00:26:54.728 [2024-07-13 06:22:00.984556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.984741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.984769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.728 qpair failed and we were unable to recover it. 00:26:54.728 [2024-07-13 06:22:00.984909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.985022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.985047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.728 qpair failed and we were unable to recover it. 00:26:54.728 [2024-07-13 06:22:00.985197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.985345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.985373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.728 qpair failed and we were unable to recover it. 00:26:54.728 [2024-07-13 06:22:00.985503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.985659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.985687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.728 qpair failed and we were unable to recover it. 00:26:54.728 [2024-07-13 06:22:00.985879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.986083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.986138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.728 qpair failed and we were unable to recover it. 00:26:54.728 [2024-07-13 06:22:00.986328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.986565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.986626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.728 qpair failed and we were unable to recover it. 
00:26:54.728 [2024-07-13 06:22:00.986763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.986924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.986953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.728 qpair failed and we were unable to recover it. 00:26:54.728 [2024-07-13 06:22:00.987228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.987482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.987510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.728 qpair failed and we were unable to recover it. 00:26:54.728 [2024-07-13 06:22:00.987668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.987827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.987855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.728 qpair failed and we were unable to recover it. 00:26:54.728 [2024-07-13 06:22:00.988068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.988218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.988259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.728 qpair failed and we were unable to recover it. 00:26:54.728 [2024-07-13 06:22:00.988423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.988565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.988591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.728 qpair failed and we were unable to recover it. 00:26:54.728 [2024-07-13 06:22:00.988783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.988939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.988969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.728 qpair failed and we were unable to recover it. 00:26:54.728 [2024-07-13 06:22:00.989131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.989258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.728 [2024-07-13 06:22:00.989286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.728 qpair failed and we were unable to recover it. 
00:26:54.728 [2024-07-13 06:22:00.989451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.989636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.989664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.729 qpair failed and we were unable to recover it. 00:26:54.729 [2024-07-13 06:22:00.989787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.989958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.989985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.729 qpair failed and we were unable to recover it. 00:26:54.729 [2024-07-13 06:22:00.990134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.990362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.990387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.729 qpair failed and we were unable to recover it. 00:26:54.729 [2024-07-13 06:22:00.990565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.990700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.990729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.729 qpair failed and we were unable to recover it. 00:26:54.729 [2024-07-13 06:22:00.990900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.991045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.991071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.729 qpair failed and we were unable to recover it. 00:26:54.729 [2024-07-13 06:22:00.991239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.991395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.991423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.729 qpair failed and we were unable to recover it. 00:26:54.729 [2024-07-13 06:22:00.991552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.991703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.991731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.729 qpair failed and we were unable to recover it. 
00:26:54.729 [2024-07-13 06:22:00.991857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.992050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.992078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.729 qpair failed and we were unable to recover it. 00:26:54.729 [2024-07-13 06:22:00.992241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.992379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.992422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.729 qpair failed and we were unable to recover it. 00:26:54.729 [2024-07-13 06:22:00.992583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.992772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.992801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.729 qpair failed and we were unable to recover it. 00:26:54.729 [2024-07-13 06:22:00.992934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.993133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.993159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.729 qpair failed and we were unable to recover it. 00:26:54.729 [2024-07-13 06:22:00.993277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.993422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.993448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.729 qpair failed and we were unable to recover it. 00:26:54.729 [2024-07-13 06:22:00.993597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.993740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.993765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.729 qpair failed and we were unable to recover it. 00:26:54.729 [2024-07-13 06:22:00.993980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.994172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.994201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.729 qpair failed and we were unable to recover it. 
00:26:54.729 [2024-07-13 06:22:00.994392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.994594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.994623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.729 qpair failed and we were unable to recover it. 00:26:54.729 [2024-07-13 06:22:00.994780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.994941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.994971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.729 qpair failed and we were unable to recover it. 00:26:54.729 [2024-07-13 06:22:00.995139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.995310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.995353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.729 qpair failed and we were unable to recover it. 00:26:54.729 [2024-07-13 06:22:00.995550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.995746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.995774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.729 qpair failed and we were unable to recover it. 00:26:54.729 [2024-07-13 06:22:00.995926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.996077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.996106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.729 qpair failed and we were unable to recover it. 00:26:54.729 [2024-07-13 06:22:00.996271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.996396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.996424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.729 qpair failed and we were unable to recover it. 00:26:54.729 [2024-07-13 06:22:00.996569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.996714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.996740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.729 qpair failed and we were unable to recover it. 
00:26:54.729 [2024-07-13 06:22:00.996908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.997069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.997098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.729 qpair failed and we were unable to recover it. 00:26:54.729 [2024-07-13 06:22:00.997284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.997530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.997591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.729 qpair failed and we were unable to recover it. 00:26:54.729 [2024-07-13 06:22:00.997787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.997934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.997961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.729 qpair failed and we were unable to recover it. 00:26:54.729 [2024-07-13 06:22:00.998112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.998252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.998278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.729 qpair failed and we were unable to recover it. 00:26:54.729 [2024-07-13 06:22:00.998447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.998610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.998638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.729 qpair failed and we were unable to recover it. 00:26:54.729 [2024-07-13 06:22:00.998819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.999021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.999051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.729 qpair failed and we were unable to recover it. 00:26:54.729 [2024-07-13 06:22:00.999235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.999387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.999416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.729 qpair failed and we were unable to recover it. 
00:26:54.729 [2024-07-13 06:22:00.999607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.999722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:00.999748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.729 qpair failed and we were unable to recover it. 00:26:54.729 [2024-07-13 06:22:00.999965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:01.000126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:01.000155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.729 qpair failed and we were unable to recover it. 00:26:54.729 [2024-07-13 06:22:01.000366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:01.000548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.729 [2024-07-13 06:22:01.000577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.730 qpair failed and we were unable to recover it. 00:26:54.730 [2024-07-13 06:22:01.000764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.000923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.000953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.730 qpair failed and we were unable to recover it. 00:26:54.730 [2024-07-13 06:22:01.001126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.001275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.001301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.730 qpair failed and we were unable to recover it. 00:26:54.730 [2024-07-13 06:22:01.001426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.001562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.001597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.730 qpair failed and we were unable to recover it. 00:26:54.730 [2024-07-13 06:22:01.001754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.001919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.001949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.730 qpair failed and we were unable to recover it. 
00:26:54.730 [2024-07-13 06:22:01.002074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.002235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.002262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.730 qpair failed and we were unable to recover it. 00:26:54.730 [2024-07-13 06:22:01.002449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.002611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.002639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.730 qpair failed and we were unable to recover it. 00:26:54.730 [2024-07-13 06:22:01.002784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.002953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.002979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.730 qpair failed and we were unable to recover it. 00:26:54.730 [2024-07-13 06:22:01.003141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.003407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.003459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.730 qpair failed and we were unable to recover it. 00:26:54.730 [2024-07-13 06:22:01.003619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.003808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.003837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.730 qpair failed and we were unable to recover it. 00:26:54.730 [2024-07-13 06:22:01.004014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.004184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.004209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.730 qpair failed and we were unable to recover it. 00:26:54.730 [2024-07-13 06:22:01.004401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.004546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.004571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.730 qpair failed and we were unable to recover it. 
00:26:54.730 [2024-07-13 06:22:01.004708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.004903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.004933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.730 qpair failed and we were unable to recover it. 00:26:54.730 [2024-07-13 06:22:01.005092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.005224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.005253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.730 qpair failed and we were unable to recover it. 00:26:54.730 [2024-07-13 06:22:01.005416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.005560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.005586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.730 qpair failed and we were unable to recover it. 00:26:54.730 [2024-07-13 06:22:01.005731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.005933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.005959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.730 qpair failed and we were unable to recover it. 00:26:54.730 [2024-07-13 06:22:01.006080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.006242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.006270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.730 qpair failed and we were unable to recover it. 00:26:54.730 [2024-07-13 06:22:01.006389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.006539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.006567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.730 qpair failed and we were unable to recover it. 00:26:54.730 [2024-07-13 06:22:01.006754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.006955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.006981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.730 qpair failed and we were unable to recover it. 
00:26:54.730 [2024-07-13 06:22:01.007172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.007362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.007390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.730 qpair failed and we were unable to recover it. 00:26:54.730 [2024-07-13 06:22:01.007548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.007705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.007734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.730 qpair failed and we were unable to recover it. 00:26:54.730 [2024-07-13 06:22:01.007920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.008061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.008090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.730 qpair failed and we were unable to recover it. 00:26:54.730 [2024-07-13 06:22:01.008234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.008380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.008405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.730 qpair failed and we were unable to recover it. 00:26:54.730 [2024-07-13 06:22:01.008578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.008740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.008768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.730 qpair failed and we were unable to recover it. 00:26:54.730 [2024-07-13 06:22:01.008938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.009102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.009131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.730 qpair failed and we were unable to recover it. 00:26:54.730 [2024-07-13 06:22:01.009254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.009414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.009444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.730 qpair failed and we were unable to recover it. 
00:26:54.730 [2024-07-13 06:22:01.009614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.009755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.730 [2024-07-13 06:22:01.009781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.731 qpair failed and we were unable to recover it. 00:26:54.731 [2024-07-13 06:22:01.009947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.010131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.010160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.731 qpair failed and we were unable to recover it. 00:26:54.731 [2024-07-13 06:22:01.010347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.010636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.010688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.731 qpair failed and we were unable to recover it. 00:26:54.731 [2024-07-13 06:22:01.010846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.011040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.011066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.731 qpair failed and we were unable to recover it. 00:26:54.731 [2024-07-13 06:22:01.011212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.011337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.011363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.731 qpair failed and we were unable to recover it. 00:26:54.731 [2024-07-13 06:22:01.011579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.011725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.011751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.731 qpair failed and we were unable to recover it. 00:26:54.731 [2024-07-13 06:22:01.011864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.012014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.012040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.731 qpair failed and we were unable to recover it. 
00:26:54.731 [2024-07-13 06:22:01.012176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.012369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.012398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.731 qpair failed and we were unable to recover it. 00:26:54.731 [2024-07-13 06:22:01.012574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.012723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.012749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.731 qpair failed and we were unable to recover it. 00:26:54.731 [2024-07-13 06:22:01.012920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.013081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.013110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.731 qpair failed and we were unable to recover it. 00:26:54.731 [2024-07-13 06:22:01.013273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.013461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.013490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.731 qpair failed and we were unable to recover it. 00:26:54.731 [2024-07-13 06:22:01.013650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.013817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.013843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.731 qpair failed and we were unable to recover it. 00:26:54.731 [2024-07-13 06:22:01.013990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.014112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.014138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.731 qpair failed and we were unable to recover it. 00:26:54.731 [2024-07-13 06:22:01.014303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.014496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.014524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.731 qpair failed and we were unable to recover it. 
00:26:54.731 [2024-07-13 06:22:01.014695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.014839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.014870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.731 qpair failed and we were unable to recover it. 00:26:54.731 [2024-07-13 06:22:01.015048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.015174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.015203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.731 qpair failed and we were unable to recover it. 00:26:54.731 [2024-07-13 06:22:01.015367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.015493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.015519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.731 qpair failed and we were unable to recover it. 00:26:54.731 [2024-07-13 06:22:01.015663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.015804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.015830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.731 qpair failed and we were unable to recover it. 00:26:54.731 [2024-07-13 06:22:01.015949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.016070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.016096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.731 qpair failed and we were unable to recover it. 00:26:54.731 [2024-07-13 06:22:01.016247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.016409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.016438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.731 qpair failed and we were unable to recover it. 00:26:54.731 [2024-07-13 06:22:01.016579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.016724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.016750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.731 qpair failed and we were unable to recover it. 
00:26:54.731 [2024-07-13 06:22:01.016901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.017068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.017097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.731 qpair failed and we were unable to recover it. 00:26:54.731 [2024-07-13 06:22:01.017343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.017674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.017725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.731 qpair failed and we were unable to recover it. 00:26:54.731 [2024-07-13 06:22:01.017889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.018041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.018070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.731 qpair failed and we were unable to recover it. 00:26:54.731 [2024-07-13 06:22:01.018240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.018427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.018455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.731 qpair failed and we were unable to recover it. 00:26:54.731 [2024-07-13 06:22:01.018605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.018776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.018802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.731 qpair failed and we were unable to recover it. 00:26:54.731 [2024-07-13 06:22:01.018976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.019125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.019153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.731 qpair failed and we were unable to recover it. 00:26:54.731 [2024-07-13 06:22:01.019282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.019471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.019500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.731 qpair failed and we were unable to recover it. 
00:26:54.731 [2024-07-13 06:22:01.019690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.019847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.019887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.731 qpair failed and we were unable to recover it. 00:26:54.731 [2024-07-13 06:22:01.020077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.020252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.020278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.731 qpair failed and we were unable to recover it. 00:26:54.731 [2024-07-13 06:22:01.020478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.020631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.020659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.731 qpair failed and we were unable to recover it. 00:26:54.731 [2024-07-13 06:22:01.020817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.731 [2024-07-13 06:22:01.021020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.021050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.732 qpair failed and we were unable to recover it. 00:26:54.732 [2024-07-13 06:22:01.021216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.021340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.021366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.732 qpair failed and we were unable to recover it. 00:26:54.732 [2024-07-13 06:22:01.021515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.021679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.021709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.732 qpair failed and we were unable to recover it. 00:26:54.732 [2024-07-13 06:22:01.021829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.021986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.022013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.732 qpair failed and we were unable to recover it. 
00:26:54.732 [2024-07-13 06:22:01.022163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.022282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.022308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.732 qpair failed and we were unable to recover it. 00:26:54.732 [2024-07-13 06:22:01.022453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.022625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.022667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.732 qpair failed and we were unable to recover it. 00:26:54.732 [2024-07-13 06:22:01.022822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.023016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.023045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.732 qpair failed and we were unable to recover it. 00:26:54.732 [2024-07-13 06:22:01.023162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.023300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.023329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.732 qpair failed and we were unable to recover it. 00:26:54.732 [2024-07-13 06:22:01.023455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.023637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.023665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.732 qpair failed and we were unable to recover it. 00:26:54.732 [2024-07-13 06:22:01.023820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.024010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.024039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.732 qpair failed and we were unable to recover it. 00:26:54.732 [2024-07-13 06:22:01.024191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.024348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.024376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.732 qpair failed and we were unable to recover it. 
00:26:54.732 [2024-07-13 06:22:01.024612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.024784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.024828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.732 qpair failed and we were unable to recover it. 00:26:54.732 [2024-07-13 06:22:01.024974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.025133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.025161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.732 qpair failed and we were unable to recover it. 00:26:54.732 [2024-07-13 06:22:01.025324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.025494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.025519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.732 qpair failed and we were unable to recover it. 00:26:54.732 [2024-07-13 06:22:01.025708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.025882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.025908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.732 qpair failed and we were unable to recover it. 00:26:54.732 [2024-07-13 06:22:01.026064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.026214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.026242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.732 qpair failed and we were unable to recover it. 00:26:54.732 [2024-07-13 06:22:01.026374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.026495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.026524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.732 qpair failed and we were unable to recover it. 00:26:54.732 [2024-07-13 06:22:01.026694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.026861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.026893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.732 qpair failed and we were unable to recover it. 
00:26:54.732 [2024-07-13 06:22:01.027070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.027247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.027295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.732 qpair failed and we were unable to recover it. 00:26:54.732 [2024-07-13 06:22:01.027482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.027673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.027728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.732 qpair failed and we were unable to recover it. 00:26:54.732 [2024-07-13 06:22:01.027918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.028104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.028130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.732 qpair failed and we were unable to recover it. 00:26:54.732 [2024-07-13 06:22:01.028298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.028435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.028460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.732 qpair failed and we were unable to recover it. 00:26:54.732 [2024-07-13 06:22:01.028629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.028790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.028820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.732 qpair failed and we were unable to recover it. 00:26:54.732 [2024-07-13 06:22:01.028980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.029164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.029192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.732 qpair failed and we were unable to recover it. 00:26:54.732 [2024-07-13 06:22:01.029376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.029561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.029615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.732 qpair failed and we were unable to recover it. 
00:26:54.732 [2024-07-13 06:22:01.029762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.029935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.029978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.732 qpair failed and we were unable to recover it. 00:26:54.732 [2024-07-13 06:22:01.030172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.030319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.030345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.732 qpair failed and we were unable to recover it. 00:26:54.732 [2024-07-13 06:22:01.030521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.030680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.030708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.732 qpair failed and we were unable to recover it. 00:26:54.732 [2024-07-13 06:22:01.030879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.031047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.031073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.732 qpair failed and we were unable to recover it. 00:26:54.732 [2024-07-13 06:22:01.031212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.031361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.031403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.732 qpair failed and we were unable to recover it. 00:26:54.732 [2024-07-13 06:22:01.031538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.031721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.732 [2024-07-13 06:22:01.031749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.732 qpair failed and we were unable to recover it. 00:26:54.732 [2024-07-13 06:22:01.031883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.032049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.032075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.733 qpair failed and we were unable to recover it. 
00:26:54.733 [2024-07-13 06:22:01.032185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.032347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.032375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.733 qpair failed and we were unable to recover it. 00:26:54.733 [2024-07-13 06:22:01.032566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.032724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.032753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.733 qpair failed and we were unable to recover it. 00:26:54.733 [2024-07-13 06:22:01.032894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.033070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.033097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.733 qpair failed and we were unable to recover it. 00:26:54.733 [2024-07-13 06:22:01.033293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.033580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.033632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.733 qpair failed and we were unable to recover it. 00:26:54.733 [2024-07-13 06:22:01.033816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.034005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.034035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.733 qpair failed and we were unable to recover it. 00:26:54.733 [2024-07-13 06:22:01.034167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.034290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.034315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.733 qpair failed and we were unable to recover it. 00:26:54.733 [2024-07-13 06:22:01.034487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.034675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.034704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.733 qpair failed and we were unable to recover it. 
00:26:54.733 [2024-07-13 06:22:01.034835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.035016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.035042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.733 qpair failed and we were unable to recover it. 00:26:54.733 [2024-07-13 06:22:01.035211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.035464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.035514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.733 qpair failed and we were unable to recover it. 00:26:54.733 [2024-07-13 06:22:01.035689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.035883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.035912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.733 qpair failed and we were unable to recover it. 00:26:54.733 [2024-07-13 06:22:01.036052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.036235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.036264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.733 qpair failed and we were unable to recover it. 00:26:54.733 [2024-07-13 06:22:01.036532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.036749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.036778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.733 qpair failed and we were unable to recover it. 00:26:54.733 [2024-07-13 06:22:01.036942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.037113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.037140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.733 qpair failed and we were unable to recover it. 00:26:54.733 [2024-07-13 06:22:01.037320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.037509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.037537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.733 qpair failed and we were unable to recover it. 
00:26:54.733 [2024-07-13 06:22:01.037688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.037897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.037923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.733 qpair failed and we were unable to recover it. 00:26:54.733 [2024-07-13 06:22:01.038044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.038204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.038232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.733 qpair failed and we were unable to recover it. 00:26:54.733 [2024-07-13 06:22:01.038395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.038557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.038589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.733 qpair failed and we were unable to recover it. 00:26:54.733 [2024-07-13 06:22:01.038749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.038891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.038917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.733 qpair failed and we were unable to recover it. 00:26:54.733 [2024-07-13 06:22:01.039089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.039229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.039255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.733 qpair failed and we were unable to recover it. 00:26:54.733 [2024-07-13 06:22:01.039403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.039575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.039604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.733 qpair failed and we were unable to recover it. 00:26:54.733 [2024-07-13 06:22:01.039769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.039955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.039984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.733 qpair failed and we were unable to recover it. 
00:26:54.733 [2024-07-13 06:22:01.040172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.040315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.040359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.733 qpair failed and we were unable to recover it. 00:26:54.733 [2024-07-13 06:22:01.040529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.040687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.040715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.733 qpair failed and we were unable to recover it. 00:26:54.733 [2024-07-13 06:22:01.040914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.041052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.041080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.733 qpair failed and we were unable to recover it. 00:26:54.733 [2024-07-13 06:22:01.041204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.041367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.041396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.733 qpair failed and we were unable to recover it. 00:26:54.733 [2024-07-13 06:22:01.041543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.041712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.041738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.733 qpair failed and we were unable to recover it. 00:26:54.733 [2024-07-13 06:22:01.041873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.042076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.042106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.733 qpair failed and we were unable to recover it. 00:26:54.733 [2024-07-13 06:22:01.042247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.042387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.042416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.733 qpair failed and we were unable to recover it. 
00:26:54.733 [2024-07-13 06:22:01.042566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.042750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.042779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.733 qpair failed and we were unable to recover it. 00:26:54.733 [2024-07-13 06:22:01.042936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.043056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.733 [2024-07-13 06:22:01.043083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.733 qpair failed and we were unable to recover it. 00:26:54.733 [2024-07-13 06:22:01.043241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.043384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.043410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.734 qpair failed and we were unable to recover it. 00:26:54.734 [2024-07-13 06:22:01.043595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.043765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.043790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.734 qpair failed and we were unable to recover it. 00:26:54.734 [2024-07-13 06:22:01.043951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.044119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.044148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.734 qpair failed and we were unable to recover it. 00:26:54.734 [2024-07-13 06:22:01.044290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.044436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.044462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.734 qpair failed and we were unable to recover it. 00:26:54.734 [2024-07-13 06:22:01.044601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.044756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.044784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.734 qpair failed and we were unable to recover it. 
00:26:54.734 [2024-07-13 06:22:01.044927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.045108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.045134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.734 qpair failed and we were unable to recover it. 00:26:54.734 [2024-07-13 06:22:01.045271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.045419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.045446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.734 qpair failed and we were unable to recover it. 00:26:54.734 [2024-07-13 06:22:01.045571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.045688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.045713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.734 qpair failed and we were unable to recover it. 00:26:54.734 [2024-07-13 06:22:01.045862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.045989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.046015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.734 qpair failed and we were unable to recover it. 00:26:54.734 [2024-07-13 06:22:01.046186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.046340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.046369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.734 qpair failed and we were unable to recover it. 00:26:54.734 [2024-07-13 06:22:01.046519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.046682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.046711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.734 qpair failed and we were unable to recover it. 00:26:54.734 [2024-07-13 06:22:01.046846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.047031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.047057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.734 qpair failed and we were unable to recover it. 
00:26:54.734 [2024-07-13 06:22:01.047187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.047368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.047397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.734 qpair failed and we were unable to recover it. 00:26:54.734 [2024-07-13 06:22:01.047601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.047768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.047797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.734 qpair failed and we were unable to recover it. 00:26:54.734 [2024-07-13 06:22:01.047973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.048097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.048123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.734 qpair failed and we were unable to recover it. 00:26:54.734 [2024-07-13 06:22:01.048259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.048408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.048450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.734 qpair failed and we were unable to recover it. 00:26:54.734 [2024-07-13 06:22:01.048610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.048773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.048803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.734 qpair failed and we were unable to recover it. 00:26:54.734 [2024-07-13 06:22:01.048976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.049107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.049133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.734 qpair failed and we were unable to recover it. 00:26:54.734 [2024-07-13 06:22:01.049331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.049534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.049580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.734 qpair failed and we were unable to recover it. 
00:26:54.734 [2024-07-13 06:22:01.049724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.049860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.049892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.734 qpair failed and we were unable to recover it. 00:26:54.734 [2024-07-13 06:22:01.050062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.050185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.050213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.734 qpair failed and we were unable to recover it. 00:26:54.734 [2024-07-13 06:22:01.050368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.050562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.050588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.734 qpair failed and we were unable to recover it. 00:26:54.734 [2024-07-13 06:22:01.050710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.050829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.050855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.734 qpair failed and we were unable to recover it. 00:26:54.734 [2024-07-13 06:22:01.050983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.051096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.051122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.734 qpair failed and we were unable to recover it. 00:26:54.734 [2024-07-13 06:22:01.051271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.051476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.051505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.734 qpair failed and we were unable to recover it. 00:26:54.734 [2024-07-13 06:22:01.051691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.051803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.051828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.734 qpair failed and we were unable to recover it. 
00:26:54.734 [2024-07-13 06:22:01.051994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.052117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.052143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.734 qpair failed and we were unable to recover it. 00:26:54.734 [2024-07-13 06:22:01.052269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.052422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.052463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.734 qpair failed and we were unable to recover it. 00:26:54.734 [2024-07-13 06:22:01.052662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.052783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.052809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.734 qpair failed and we were unable to recover it. 00:26:54.734 [2024-07-13 06:22:01.052934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.053073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.053101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.734 qpair failed and we were unable to recover it. 00:26:54.734 [2024-07-13 06:22:01.053268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.053441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.734 [2024-07-13 06:22:01.053467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.734 qpair failed and we were unable to recover it. 00:26:54.734 [2024-07-13 06:22:01.053583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.053729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.053754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.735 qpair failed and we were unable to recover it. 00:26:54.735 [2024-07-13 06:22:01.053982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.054126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.054151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.735 qpair failed and we were unable to recover it. 
00:26:54.735 [2024-07-13 06:22:01.054350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.054533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.054561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.735 qpair failed and we were unable to recover it. 00:26:54.735 [2024-07-13 06:22:01.054756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.054917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.054943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.735 qpair failed and we were unable to recover it. 00:26:54.735 [2024-07-13 06:22:01.055062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.055212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.055238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.735 qpair failed and we were unable to recover it. 00:26:54.735 [2024-07-13 06:22:01.055437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.055566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.055595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.735 qpair failed and we were unable to recover it. 00:26:54.735 [2024-07-13 06:22:01.055788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.055968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.055994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.735 qpair failed and we were unable to recover it. 00:26:54.735 [2024-07-13 06:22:01.056176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.056399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.056445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.735 qpair failed and we were unable to recover it. 00:26:54.735 [2024-07-13 06:22:01.056589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.056739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.056781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.735 qpair failed and we were unable to recover it. 
00:26:54.735 [2024-07-13 06:22:01.056977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.057102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.057128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.735 qpair failed and we were unable to recover it. 00:26:54.735 [2024-07-13 06:22:01.057273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.057432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.057480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.735 qpair failed and we were unable to recover it. 00:26:54.735 [2024-07-13 06:22:01.057615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.057806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.057832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.735 qpair failed and we were unable to recover it. 00:26:54.735 [2024-07-13 06:22:01.057962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.058082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.058107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.735 qpair failed and we were unable to recover it. 00:26:54.735 [2024-07-13 06:22:01.058228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.058399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.058424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.735 qpair failed and we were unable to recover it. 00:26:54.735 [2024-07-13 06:22:01.058596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.058754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.058782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.735 qpair failed and we were unable to recover it. 00:26:54.735 [2024-07-13 06:22:01.058958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.059105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.059131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.735 qpair failed and we were unable to recover it. 
00:26:54.735 [2024-07-13 06:22:01.059334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.059480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.059527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.735 qpair failed and we were unable to recover it. 00:26:54.735 [2024-07-13 06:22:01.059685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.059807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.059835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.735 qpair failed and we were unable to recover it. 00:26:54.735 [2024-07-13 06:22:01.059992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.060119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.060147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.735 qpair failed and we were unable to recover it. 00:26:54.735 [2024-07-13 06:22:01.060333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.060508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.060536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.735 qpair failed and we were unable to recover it. 00:26:54.735 [2024-07-13 06:22:01.060697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.060860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.060925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.735 qpair failed and we were unable to recover it. 00:26:54.735 [2024-07-13 06:22:01.061098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.061225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.061250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.735 qpair failed and we were unable to recover it. 00:26:54.735 [2024-07-13 06:22:01.061440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.061599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.735 [2024-07-13 06:22:01.061627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.735 qpair failed and we were unable to recover it. 
00:26:54.735 [2024-07-13 06:22:01.061790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.735 [2024-07-13 06:22:01.061962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.735 [2024-07-13 06:22:01.061992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420
00:26:54.735 qpair failed and we were unable to recover it.
00:26:54.735 [2024-07-13 06:22:01.062139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.735 [2024-07-13 06:22:01.062307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.735 [2024-07-13 06:22:01.062333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420
00:26:54.735 qpair failed and we were unable to recover it.
[... the same three-record sequence -- two posix_sock_create connect() failures with errno = 111, an nvme_tcp_qpair_connect_sock error for tqpair=0x151d9f0 (addr=10.0.0.2, port=4420), and "qpair failed and we were unable to recover it." -- repeats for every remaining reconnect attempt between 06:22:01.062 and 06:22:01.115 ...]
00:26:54.741 [2024-07-13 06:22:01.115950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.741 [2024-07-13 06:22:01.116073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:54.741 [2024-07-13 06:22:01.116098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420
00:26:54.741 qpair failed and we were unable to recover it.
00:26:54.741 [2024-07-13 06:22:01.116255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.116397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.116424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.741 qpair failed and we were unable to recover it. 00:26:54.741 [2024-07-13 06:22:01.116617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.116792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.116819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.741 qpair failed and we were unable to recover it. 00:26:54.741 [2024-07-13 06:22:01.116995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.117141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.117170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.741 qpair failed and we were unable to recover it. 00:26:54.741 [2024-07-13 06:22:01.117331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.117516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.117543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.741 qpair failed and we were unable to recover it. 00:26:54.741 [2024-07-13 06:22:01.117674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.117828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.117855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.741 qpair failed and we were unable to recover it. 00:26:54.741 [2024-07-13 06:22:01.118036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.118155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.118197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.741 qpair failed and we were unable to recover it. 00:26:54.741 [2024-07-13 06:22:01.118327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.118451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.118478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.741 qpair failed and we were unable to recover it. 
00:26:54.741 [2024-07-13 06:22:01.118639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.118781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.118808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.741 qpair failed and we were unable to recover it. 00:26:54.741 [2024-07-13 06:22:01.118965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.119087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.119112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.741 qpair failed and we were unable to recover it. 00:26:54.741 [2024-07-13 06:22:01.119232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.119427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.119454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.741 qpair failed and we were unable to recover it. 00:26:54.741 [2024-07-13 06:22:01.119639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.119790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.119817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.741 qpair failed and we were unable to recover it. 00:26:54.741 [2024-07-13 06:22:01.119975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.120121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.120146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.741 qpair failed and we were unable to recover it. 00:26:54.741 [2024-07-13 06:22:01.120296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.120434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.120459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.741 qpair failed and we were unable to recover it. 00:26:54.741 [2024-07-13 06:22:01.120620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.120761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.120785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.741 qpair failed and we were unable to recover it. 
00:26:54.741 [2024-07-13 06:22:01.120911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.121031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.121057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.741 qpair failed and we were unable to recover it. 00:26:54.741 [2024-07-13 06:22:01.121179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.121303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.121327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.741 qpair failed and we were unable to recover it. 00:26:54.741 [2024-07-13 06:22:01.121497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.121659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.121691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.741 qpair failed and we were unable to recover it. 00:26:54.741 [2024-07-13 06:22:01.121819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.121968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.121993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.741 qpair failed and we were unable to recover it. 00:26:54.741 [2024-07-13 06:22:01.122144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.122315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.122356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.741 qpair failed and we were unable to recover it. 00:26:54.741 [2024-07-13 06:22:01.122521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.122648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.122675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.741 qpair failed and we were unable to recover it. 00:26:54.741 [2024-07-13 06:22:01.122856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.123059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.123085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.741 qpair failed and we were unable to recover it. 
00:26:54.741 [2024-07-13 06:22:01.123242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.123430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.123458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.741 qpair failed and we were unable to recover it. 00:26:54.741 [2024-07-13 06:22:01.123603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.123787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.123814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.741 qpair failed and we were unable to recover it. 00:26:54.741 [2024-07-13 06:22:01.123972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.124083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.124108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.741 qpair failed and we were unable to recover it. 00:26:54.741 [2024-07-13 06:22:01.124224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.124401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.124427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.741 qpair failed and we were unable to recover it. 00:26:54.741 [2024-07-13 06:22:01.124540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.124709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.124736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.741 qpair failed and we were unable to recover it. 00:26:54.741 [2024-07-13 06:22:01.124880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.124993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.125018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.741 qpair failed and we were unable to recover it. 00:26:54.741 [2024-07-13 06:22:01.125134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.125296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.125323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.741 qpair failed and we were unable to recover it. 
00:26:54.741 [2024-07-13 06:22:01.125455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.125610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.125634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.741 qpair failed and we were unable to recover it. 00:26:54.741 [2024-07-13 06:22:01.125830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.741 [2024-07-13 06:22:01.125951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.125976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.742 qpair failed and we were unable to recover it. 00:26:54.742 [2024-07-13 06:22:01.126126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.126936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.126965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.742 qpair failed and we were unable to recover it. 00:26:54.742 [2024-07-13 06:22:01.127141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.127423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.127451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.742 qpair failed and we were unable to recover it. 00:26:54.742 [2024-07-13 06:22:01.127630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.127796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.127820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.742 qpair failed and we were unable to recover it. 00:26:54.742 [2024-07-13 06:22:01.127934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.128057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.128083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.742 qpair failed and we were unable to recover it. 00:26:54.742 [2024-07-13 06:22:01.128225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.128413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.128441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.742 qpair failed and we were unable to recover it. 
00:26:54.742 [2024-07-13 06:22:01.128580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.128766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.128792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.742 qpair failed and we were unable to recover it. 00:26:54.742 [2024-07-13 06:22:01.128962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.129118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.129146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.742 qpair failed and we were unable to recover it. 00:26:54.742 [2024-07-13 06:22:01.129334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.129517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.129544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.742 qpair failed and we were unable to recover it. 00:26:54.742 [2024-07-13 06:22:01.129675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.129791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.129816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.742 qpair failed and we were unable to recover it. 00:26:54.742 [2024-07-13 06:22:01.129980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.130690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.130719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.742 qpair failed and we were unable to recover it. 00:26:54.742 [2024-07-13 06:22:01.130910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.131034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.131060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.742 qpair failed and we were unable to recover it. 00:26:54.742 [2024-07-13 06:22:01.131202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.131318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.131345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.742 qpair failed and we were unable to recover it. 
00:26:54.742 [2024-07-13 06:22:01.131477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.131622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.131647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.742 qpair failed and we were unable to recover it. 00:26:54.742 [2024-07-13 06:22:01.131823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.132010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.132039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.742 qpair failed and we were unable to recover it. 00:26:54.742 [2024-07-13 06:22:01.132195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.132338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.132363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.742 qpair failed and we were unable to recover it. 00:26:54.742 [2024-07-13 06:22:01.132538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.132672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.132700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.742 qpair failed and we were unable to recover it. 00:26:54.742 [2024-07-13 06:22:01.132833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.133001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.133028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.742 qpair failed and we were unable to recover it. 00:26:54.742 [2024-07-13 06:22:01.133170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.133303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.133330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.742 qpair failed and we were unable to recover it. 00:26:54.742 [2024-07-13 06:22:01.133479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.133638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.133665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.742 qpair failed and we were unable to recover it. 
00:26:54.742 [2024-07-13 06:22:01.133836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.133968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.133994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.742 qpair failed and we were unable to recover it. 00:26:54.742 [2024-07-13 06:22:01.134108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.134293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.134321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.742 qpair failed and we were unable to recover it. 00:26:54.742 [2024-07-13 06:22:01.134516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.134685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.134709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.742 qpair failed and we were unable to recover it. 00:26:54.742 [2024-07-13 06:22:01.134884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.135087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.135115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.742 qpair failed and we were unable to recover it. 00:26:54.742 [2024-07-13 06:22:01.135242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.135505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.135554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.742 qpair failed and we were unable to recover it. 00:26:54.742 [2024-07-13 06:22:01.135749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.135908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.135937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.742 qpair failed and we were unable to recover it. 00:26:54.742 [2024-07-13 06:22:01.136087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.136833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.136881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.742 qpair failed and we were unable to recover it. 
00:26:54.742 [2024-07-13 06:22:01.137071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.137824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.137856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.742 qpair failed and we were unable to recover it. 00:26:54.742 [2024-07-13 06:22:01.138039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.138224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.138250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.742 qpair failed and we were unable to recover it. 00:26:54.742 [2024-07-13 06:22:01.138408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.138554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.138580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.742 qpair failed and we were unable to recover it. 00:26:54.742 [2024-07-13 06:22:01.138709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.138840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.138881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.742 qpair failed and we were unable to recover it. 00:26:54.742 [2024-07-13 06:22:01.139027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.742 [2024-07-13 06:22:01.139172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.139199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.743 qpair failed and we were unable to recover it. 00:26:54.743 [2024-07-13 06:22:01.139330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.139528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.139555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.743 qpair failed and we were unable to recover it. 00:26:54.743 [2024-07-13 06:22:01.139721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.139873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.139916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.743 qpair failed and we were unable to recover it. 
00:26:54.743 [2024-07-13 06:22:01.140037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.140158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.140182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.743 qpair failed and we were unable to recover it. 00:26:54.743 [2024-07-13 06:22:01.140337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.140500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.140528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.743 qpair failed and we were unable to recover it. 00:26:54.743 [2024-07-13 06:22:01.140652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.140806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.140833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.743 qpair failed and we were unable to recover it. 00:26:54.743 [2024-07-13 06:22:01.141016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.141128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.141153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.743 qpair failed and we were unable to recover it. 00:26:54.743 [2024-07-13 06:22:01.141311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.141510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.141538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.743 qpair failed and we were unable to recover it. 00:26:54.743 [2024-07-13 06:22:01.141689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.141829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.141854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.743 qpair failed and we were unable to recover it. 00:26:54.743 [2024-07-13 06:22:01.141981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.142096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.142121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.743 qpair failed and we were unable to recover it. 
00:26:54.743 [2024-07-13 06:22:01.142309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.142470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.142498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.743 qpair failed and we were unable to recover it. 00:26:54.743 [2024-07-13 06:22:01.142703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.142822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.142849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.743 qpair failed and we were unable to recover it. 00:26:54.743 [2024-07-13 06:22:01.143050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.143160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.143185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.743 qpair failed and we were unable to recover it. 00:26:54.743 [2024-07-13 06:22:01.143306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.143475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.143521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.743 qpair failed and we were unable to recover it. 00:26:54.743 [2024-07-13 06:22:01.143665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.143819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.143846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.743 qpair failed and we were unable to recover it. 00:26:54.743 [2024-07-13 06:22:01.144079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.144252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.144277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.743 qpair failed and we were unable to recover it. 00:26:54.743 [2024-07-13 06:22:01.144465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.144675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.144703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.743 qpair failed and we were unable to recover it. 
00:26:54.743 [2024-07-13 06:22:01.144860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.145036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.145062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.743 qpair failed and we were unable to recover it. 00:26:54.743 [2024-07-13 06:22:01.145234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.145397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.145424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.743 qpair failed and we were unable to recover it. 00:26:54.743 [2024-07-13 06:22:01.145612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.145781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.145806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.743 qpair failed and we were unable to recover it. 00:26:54.743 [2024-07-13 06:22:01.145924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.146061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.146085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.743 qpair failed and we were unable to recover it. 00:26:54.743 [2024-07-13 06:22:01.146218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.146443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.146488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.743 qpair failed and we were unable to recover it. 00:26:54.743 [2024-07-13 06:22:01.146675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.146883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.146927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.743 qpair failed and we were unable to recover it. 00:26:54.743 [2024-07-13 06:22:01.147111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.147308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.147353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x151d9f0 with addr=10.0.0.2, port=4420 00:26:54.743 qpair failed and we were unable to recover it. 
00:26:54.743 [2024-07-13 06:22:01.147549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.147750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.147778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.743 qpair failed and we were unable to recover it. 00:26:54.743 [2024-07-13 06:22:01.147944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.148143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.148172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.743 qpair failed and we were unable to recover it. 00:26:54.743 [2024-07-13 06:22:01.148325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.149058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.743 [2024-07-13 06:22:01.149088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.743 qpair failed and we were unable to recover it. 00:26:54.744 [2024-07-13 06:22:01.149301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.149487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.149512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.744 qpair failed and we were unable to recover it. 00:26:54.744 [2024-07-13 06:22:01.149686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.149834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.149859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.744 qpair failed and we were unable to recover it. 00:26:54.744 [2024-07-13 06:22:01.150033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.150171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.150201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.744 qpair failed and we were unable to recover it. 00:26:54.744 [2024-07-13 06:22:01.150367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.150492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.150520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.744 qpair failed and we were unable to recover it. 
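For context on the repeated records above: errno 111 is ECONNREFUSED on Linux, so every posix_sock_create() attempt is being refused because nothing is listening on 10.0.0.2:4420 while the target side of the test is down, and each refused connect() surfaces in nvme_tcp_qpair_connect_sock() as a qpair that "failed and we were unable to recover it". The short C sketch below is illustrative only — it is not SPDK code, and the address and port are simply copied from the log — and it reproduces the same failure mode against a port with no listener.

/*
 * Illustrative sketch only (not part of the SPDK test suite): shows why the
 * log is full of "connect() failed, errno = 111". 111 is ECONNREFUSED, i.e.
 * the TCP connection attempt to 10.0.0.2:4420 is rejected because no
 * NVMe/TCP listener is currently bound to that port.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* address and port taken from the log above */
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 1;

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        /* with the target down this would print: errno=111 (Connection refused) */
        printf("connect() failed: errno=%d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}

Compiled and run from the host-side namespace the test uses, this would keep printing errno=111 until a listener is bound to the port again, which is exactly the pattern the log shows.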
00:26:54.744 [2024-07-13 06:22:01.150762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 1235984 Killed "${NVMF_APP[@]}" "$@" 00:26:54.744 [2024-07-13 06:22:01.150934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.150960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.744 qpair failed and we were unable to recover it. 00:26:54.744 [2024-07-13 06:22:01.151082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.151219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 06:22:01 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 00:26:54.744 [2024-07-13 06:22:01.151248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.744 qpair failed and we were unable to recover it. 00:26:54.744 06:22:01 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:54.744 [2024-07-13 06:22:01.151413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 06:22:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:54.744 [2024-07-13 06:22:01.151575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.151603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.744 qpair failed and we were unable to recover it. 00:26:54.744 06:22:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:54.744 [2024-07-13 06:22:01.151748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 06:22:01 -- common/autotest_common.sh@10 -- # set +x 00:26:54.744 [2024-07-13 06:22:01.151925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.151951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.744 qpair failed and we were unable to recover it. 00:26:54.744 [2024-07-13 06:22:01.152077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.152233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.152258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.744 qpair failed and we were unable to recover it. 00:26:54.744 [2024-07-13 06:22:01.152413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.152607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.152635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.744 qpair failed and we were unable to recover it. 
00:26:54.744 [2024-07-13 06:22:01.152814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.152934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.152960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.744 qpair failed and we were unable to recover it. 00:26:54.744 [2024-07-13 06:22:01.153089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.153252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.153279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.744 qpair failed and we were unable to recover it. 00:26:54.744 [2024-07-13 06:22:01.153448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.153642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.153671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.744 qpair failed and we were unable to recover it. 00:26:54.744 [2024-07-13 06:22:01.153855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.154028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.154055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.744 qpair failed and we were unable to recover it. 00:26:54.744 [2024-07-13 06:22:01.154218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.154437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.154464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.744 qpair failed and we were unable to recover it. 00:26:54.744 [2024-07-13 06:22:01.154636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.154757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.154782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.744 qpair failed and we were unable to recover it. 00:26:54.744 [2024-07-13 06:22:01.154941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.155097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.155124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.744 qpair failed and we were unable to recover it. 
00:26:54.744 06:22:01 -- nvmf/common.sh@469 -- # nvmfpid=1236666 00:26:54.744 06:22:01 -- nvmf/common.sh@470 -- # waitforlisten 1236666 00:26:54.744 [2024-07-13 06:22:01.155334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 06:22:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:54.744 [2024-07-13 06:22:01.155520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.155548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.744 06:22:01 -- common/autotest_common.sh@819 -- # '[' -z 1236666 ']' 00:26:54.744 qpair failed and we were unable to recover it. 00:26:54.744 06:22:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.744 [2024-07-13 06:22:01.155712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 06:22:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:54.744 [2024-07-13 06:22:01.155886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.155910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.744 qpair failed and we were unable to recover it. 00:26:54.744 06:22:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:54.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:54.744 06:22:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:54.744 [2024-07-13 06:22:01.156054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 06:22:01 -- common/autotest_common.sh@10 -- # set +x 00:26:54.744 [2024-07-13 06:22:01.156206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.156233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.744 qpair failed and we were unable to recover it. 00:26:54.744 [2024-07-13 06:22:01.156443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.156580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.156604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.744 qpair failed and we were unable to recover it. 00:26:54.744 [2024-07-13 06:22:01.156734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.156878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.156903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.744 qpair failed and we were unable to recover it. 
00:26:54.744 [2024-07-13 06:22:01.157029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.157162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.157187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.744 qpair failed and we were unable to recover it. 00:26:54.744 [2024-07-13 06:22:01.157332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.157482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.157515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.744 qpair failed and we were unable to recover it. 00:26:54.744 [2024-07-13 06:22:01.157667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.157804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.157828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.744 qpair failed and we were unable to recover it. 00:26:54.744 [2024-07-13 06:22:01.157984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.158138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.158165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.744 qpair failed and we were unable to recover it. 00:26:54.744 [2024-07-13 06:22:01.158328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.158465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.744 [2024-07-13 06:22:01.158490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.744 qpair failed and we were unable to recover it. 00:26:54.745 [2024-07-13 06:22:01.158611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.158766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.158791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.745 qpair failed and we were unable to recover it. 00:26:54.745 [2024-07-13 06:22:01.158932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.159079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.159106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.745 qpair failed and we were unable to recover it. 
00:26:54.745 [2024-07-13 06:22:01.159218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.159335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.159361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.745 qpair failed and we were unable to recover it. 00:26:54.745 [2024-07-13 06:22:01.159505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.159666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.159692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.745 qpair failed and we were unable to recover it. 00:26:54.745 [2024-07-13 06:22:01.159828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.159986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.160014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.745 qpair failed and we were unable to recover it. 00:26:54.745 [2024-07-13 06:22:01.160168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.160356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.160384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.745 qpair failed and we were unable to recover it. 00:26:54.745 [2024-07-13 06:22:01.160575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.160697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.160723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.745 qpair failed and we were unable to recover it. 00:26:54.745 [2024-07-13 06:22:01.160877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.161068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.161094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.745 qpair failed and we were unable to recover it. 00:26:54.745 [2024-07-13 06:22:01.161250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.161408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.161433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.745 qpair failed and we were unable to recover it. 
00:26:54.745 [2024-07-13 06:22:01.161605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.161751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.161776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.745 qpair failed and we were unable to recover it. 00:26:54.745 [2024-07-13 06:22:01.161968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.162116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.162140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.745 qpair failed and we were unable to recover it. 00:26:54.745 [2024-07-13 06:22:01.162292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.162423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.162447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.745 qpair failed and we were unable to recover it. 00:26:54.745 [2024-07-13 06:22:01.162567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.162716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.162741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.745 qpair failed and we were unable to recover it. 00:26:54.745 [2024-07-13 06:22:01.162888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.163008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.163033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.745 qpair failed and we were unable to recover it. 00:26:54.745 [2024-07-13 06:22:01.163157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.163279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.163303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.745 qpair failed and we were unable to recover it. 00:26:54.745 [2024-07-13 06:22:01.163435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.163548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.163573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.745 qpair failed and we were unable to recover it. 
00:26:54.745 [2024-07-13 06:22:01.163695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.163839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.163863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.745 qpair failed and we were unable to recover it. 00:26:54.745 [2024-07-13 06:22:01.163992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.164139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.164165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.745 qpair failed and we were unable to recover it. 00:26:54.745 [2024-07-13 06:22:01.164338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.164469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.164497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.745 qpair failed and we were unable to recover it. 00:26:54.745 [2024-07-13 06:22:01.164630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.164748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.164776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.745 qpair failed and we were unable to recover it. 00:26:54.745 [2024-07-13 06:22:01.164946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.165088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.165112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.745 qpair failed and we were unable to recover it. 00:26:54.745 [2024-07-13 06:22:01.165261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.165399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.165427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.745 qpair failed and we were unable to recover it. 00:26:54.745 [2024-07-13 06:22:01.165568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.165724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.165751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.745 qpair failed and we were unable to recover it. 
00:26:54.745 [2024-07-13 06:22:01.165897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.166018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.166042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.745 qpair failed and we were unable to recover it. 00:26:54.745 [2024-07-13 06:22:01.166163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.166355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.166383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.745 qpair failed and we were unable to recover it. 00:26:54.745 [2024-07-13 06:22:01.166577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.166751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.166776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.745 qpair failed and we were unable to recover it. 00:26:54.745 [2024-07-13 06:22:01.166899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.167066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.167093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.745 qpair failed and we were unable to recover it. 00:26:54.745 [2024-07-13 06:22:01.167239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.167425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.167453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.745 qpair failed and we were unable to recover it. 00:26:54.745 [2024-07-13 06:22:01.167592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.167735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.167760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.745 qpair failed and we were unable to recover it. 00:26:54.745 [2024-07-13 06:22:01.167876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.168025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.168050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.745 qpair failed and we were unable to recover it. 
00:26:54.745 [2024-07-13 06:22:01.168222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.745 [2024-07-13 06:22:01.168371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.168395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.746 qpair failed and we were unable to recover it. 00:26:54.746 [2024-07-13 06:22:01.168554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.168695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.168720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.746 qpair failed and we were unable to recover it. 00:26:54.746 [2024-07-13 06:22:01.168844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.168989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.169018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.746 qpair failed and we were unable to recover it. 00:26:54.746 [2024-07-13 06:22:01.169205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.169378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.169405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.746 qpair failed and we were unable to recover it. 00:26:54.746 [2024-07-13 06:22:01.169566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.169702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.169726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.746 qpair failed and we were unable to recover it. 00:26:54.746 [2024-07-13 06:22:01.169882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.170034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.170058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.746 qpair failed and we were unable to recover it. 00:26:54.746 [2024-07-13 06:22:01.170182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.170339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.170364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.746 qpair failed and we were unable to recover it. 
00:26:54.746 [2024-07-13 06:22:01.170513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.170687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.170712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.746 qpair failed and we were unable to recover it. 00:26:54.746 [2024-07-13 06:22:01.170881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.171042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.171070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.746 qpair failed and we were unable to recover it. 00:26:54.746 [2024-07-13 06:22:01.171207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.171348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.171375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.746 qpair failed and we were unable to recover it. 00:26:54.746 [2024-07-13 06:22:01.171551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.171695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.171720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.746 qpair failed and we were unable to recover it. 00:26:54.746 [2024-07-13 06:22:01.171875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.172038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.172066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.746 qpair failed and we were unable to recover it. 00:26:54.746 [2024-07-13 06:22:01.172262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.172445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.172472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.746 qpair failed and we were unable to recover it. 00:26:54.746 [2024-07-13 06:22:01.172645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.172794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.172820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.746 qpair failed and we were unable to recover it. 
00:26:54.746 [2024-07-13 06:22:01.173017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.173175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.173202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.746 qpair failed and we were unable to recover it. 00:26:54.746 [2024-07-13 06:22:01.173415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.173591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.173633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.746 qpair failed and we were unable to recover it. 00:26:54.746 [2024-07-13 06:22:01.173751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.173877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.173919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.746 qpair failed and we were unable to recover it. 00:26:54.746 [2024-07-13 06:22:01.174109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.174303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.174335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.746 qpair failed and we were unable to recover it. 00:26:54.746 [2024-07-13 06:22:01.174554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.174716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.174741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.746 qpair failed and we were unable to recover it. 00:26:54.746 [2024-07-13 06:22:01.174940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.175095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.175123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.746 qpair failed and we were unable to recover it. 00:26:54.746 [2024-07-13 06:22:01.175320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.175479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.175507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.746 qpair failed and we were unable to recover it. 
00:26:54.746 [2024-07-13 06:22:01.175708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.175897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.175927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.746 qpair failed and we were unable to recover it. 00:26:54.746 [2024-07-13 06:22:01.176073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.176224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.176248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.746 qpair failed and we were unable to recover it. 00:26:54.746 [2024-07-13 06:22:01.176384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.176503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.176527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.746 qpair failed and we were unable to recover it. 00:26:54.746 [2024-07-13 06:22:01.176665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.176815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.176848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.746 qpair failed and we were unable to recover it. 00:26:54.746 [2024-07-13 06:22:01.176977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.177127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.177152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.746 qpair failed and we were unable to recover it. 00:26:54.746 [2024-07-13 06:22:01.177300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.177437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.177462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.746 qpair failed and we were unable to recover it. 00:26:54.746 [2024-07-13 06:22:01.177617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.177786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.177818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.746 qpair failed and we were unable to recover it. 
00:26:54.746 [2024-07-13 06:22:01.177970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.178112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.178139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.746 qpair failed and we were unable to recover it. 00:26:54.746 [2024-07-13 06:22:01.178292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.178409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.178433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.746 qpair failed and we were unable to recover it. 00:26:54.746 [2024-07-13 06:22:01.178577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.746 [2024-07-13 06:22:01.178732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.178756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.747 qpair failed and we were unable to recover it. 00:26:54.747 [2024-07-13 06:22:01.178919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.179037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.179062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.747 qpair failed and we were unable to recover it. 00:26:54.747 [2024-07-13 06:22:01.179178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.179306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.179330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.747 qpair failed and we were unable to recover it. 00:26:54.747 [2024-07-13 06:22:01.179493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.179663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.179689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.747 qpair failed and we were unable to recover it. 00:26:54.747 [2024-07-13 06:22:01.179837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.179965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.179990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.747 qpair failed and we were unable to recover it. 
00:26:54.747 [2024-07-13 06:22:01.180109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.180256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.180281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.747 qpair failed and we were unable to recover it. 00:26:54.747 [2024-07-13 06:22:01.180425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.180546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.180570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.747 qpair failed and we were unable to recover it. 00:26:54.747 [2024-07-13 06:22:01.180720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.180870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.180898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.747 qpair failed and we were unable to recover it. 00:26:54.747 [2024-07-13 06:22:01.181015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.181158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.181189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.747 qpair failed and we were unable to recover it. 00:26:54.747 [2024-07-13 06:22:01.181326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.181441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.181466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.747 qpair failed and we were unable to recover it. 00:26:54.747 [2024-07-13 06:22:01.181616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.181761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.181786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.747 qpair failed and we were unable to recover it. 00:26:54.747 [2024-07-13 06:22:01.181937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.182071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.182095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.747 qpair failed and we were unable to recover it. 
00:26:54.747 [2024-07-13 06:22:01.182246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.182372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.182398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.747 qpair failed and we were unable to recover it. 00:26:54.747 [2024-07-13 06:22:01.182544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.182715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.182741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.747 qpair failed and we were unable to recover it. 00:26:54.747 [2024-07-13 06:22:01.182914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.183080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.183104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.747 qpair failed and we were unable to recover it. 00:26:54.747 [2024-07-13 06:22:01.183281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.183462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.183487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.747 qpair failed and we were unable to recover it. 00:26:54.747 [2024-07-13 06:22:01.183637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.183756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.183780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.747 qpair failed and we were unable to recover it. 00:26:54.747 [2024-07-13 06:22:01.183928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.184069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.184098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.747 qpair failed and we were unable to recover it. 00:26:54.747 [2024-07-13 06:22:01.184272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.184417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.184443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.747 qpair failed and we were unable to recover it. 
00:26:54.747 [2024-07-13 06:22:01.184593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.184734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.184759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.747 qpair failed and we were unable to recover it. 00:26:54.747 [2024-07-13 06:22:01.184921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.185063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.185089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.747 qpair failed and we were unable to recover it. 00:26:54.747 [2024-07-13 06:22:01.185217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.185366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.185391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.747 qpair failed and we were unable to recover it. 00:26:54.747 [2024-07-13 06:22:01.185535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.185683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.185708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.747 qpair failed and we were unable to recover it. 00:26:54.747 [2024-07-13 06:22:01.185822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.185952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.185978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.747 qpair failed and we were unable to recover it. 00:26:54.747 [2024-07-13 06:22:01.186103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.186261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.186286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.747 qpair failed and we were unable to recover it. 00:26:54.747 [2024-07-13 06:22:01.186454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.186569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.186594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.747 qpair failed and we were unable to recover it. 
00:26:54.747 [2024-07-13 06:22:01.186736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.186888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.186914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.747 qpair failed and we were unable to recover it. 00:26:54.747 [2024-07-13 06:22:01.187055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.187197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.187222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.747 qpair failed and we were unable to recover it. 00:26:54.747 [2024-07-13 06:22:01.187375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.187493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.187518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.747 qpair failed and we were unable to recover it. 00:26:54.747 [2024-07-13 06:22:01.187693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.187835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.187860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.747 qpair failed and we were unable to recover it. 00:26:54.747 [2024-07-13 06:22:01.187975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.188123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.747 [2024-07-13 06:22:01.188149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.747 qpair failed and we were unable to recover it. 00:26:54.747 [2024-07-13 06:22:01.188270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.188415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.188441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.748 qpair failed and we were unable to recover it. 00:26:54.748 [2024-07-13 06:22:01.188587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.188749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.188777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.748 qpair failed and we were unable to recover it. 
00:26:54.748 [2024-07-13 06:22:01.188934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.189082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.189107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.748 qpair failed and we were unable to recover it. 00:26:54.748 [2024-07-13 06:22:01.189252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.189398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.189423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.748 qpair failed and we were unable to recover it. 00:26:54.748 [2024-07-13 06:22:01.189576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.189717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.189742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.748 qpair failed and we were unable to recover it. 00:26:54.748 [2024-07-13 06:22:01.189918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.190089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.190114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.748 qpair failed and we were unable to recover it. 00:26:54.748 [2024-07-13 06:22:01.190231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.190383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.190410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.748 qpair failed and we were unable to recover it. 00:26:54.748 [2024-07-13 06:22:01.190588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.190730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.190755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.748 qpair failed and we were unable to recover it. 00:26:54.748 [2024-07-13 06:22:01.190876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.191017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.191042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.748 qpair failed and we were unable to recover it. 
00:26:54.748 [2024-07-13 06:22:01.191174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.191345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.191370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.748 qpair failed and we were unable to recover it. 00:26:54.748 [2024-07-13 06:22:01.191542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.191690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.191715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.748 qpair failed and we were unable to recover it. 00:26:54.748 [2024-07-13 06:22:01.191862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.191979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.192003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.748 qpair failed and we were unable to recover it. 00:26:54.748 [2024-07-13 06:22:01.192121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.192271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.192298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.748 qpair failed and we were unable to recover it. 00:26:54.748 [2024-07-13 06:22:01.192448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.192557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.192582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.748 qpair failed and we were unable to recover it. 00:26:54.748 [2024-07-13 06:22:01.192700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.192872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.192916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.748 qpair failed and we were unable to recover it. 00:26:54.748 [2024-07-13 06:22:01.193078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.193221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.193246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.748 qpair failed and we were unable to recover it. 
00:26:54.748 [2024-07-13 06:22:01.193419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.193563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.193588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.748 qpair failed and we were unable to recover it. 00:26:54.748 [2024-07-13 06:22:01.193768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.193923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.193949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.748 qpair failed and we were unable to recover it. 00:26:54.748 [2024-07-13 06:22:01.194073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.194189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.194214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.748 qpair failed and we were unable to recover it. 00:26:54.748 [2024-07-13 06:22:01.194342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.194510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.194535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.748 qpair failed and we were unable to recover it. 00:26:54.748 [2024-07-13 06:22:01.194702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.194850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.194882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.748 qpair failed and we were unable to recover it. 00:26:54.748 [2024-07-13 06:22:01.195003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.195145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.195170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.748 qpair failed and we were unable to recover it. 00:26:54.748 [2024-07-13 06:22:01.195314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.195427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.195452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.748 qpair failed and we were unable to recover it. 
00:26:54.748 [2024-07-13 06:22:01.195569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.195714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.195738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.748 qpair failed and we were unable to recover it. 00:26:54.748 [2024-07-13 06:22:01.195889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.196033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.196058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.748 qpair failed and we were unable to recover it. 00:26:54.748 [2024-07-13 06:22:01.196179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.196299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.196323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.748 qpair failed and we were unable to recover it. 00:26:54.748 [2024-07-13 06:22:01.196464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.748 [2024-07-13 06:22:01.196632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.196657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.749 qpair failed and we were unable to recover it. 00:26:54.749 [2024-07-13 06:22:01.196808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.197000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.197025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.749 qpair failed and we were unable to recover it. 00:26:54.749 [2024-07-13 06:22:01.197157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.197301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.197326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.749 qpair failed and we were unable to recover it. 00:26:54.749 [2024-07-13 06:22:01.197473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.197644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.197669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.749 qpair failed and we were unable to recover it. 
00:26:54.749 [2024-07-13 06:22:01.197830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.197972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.197998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.749 qpair failed and we were unable to recover it. 00:26:54.749 [2024-07-13 06:22:01.198166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.198327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.198351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.749 qpair failed and we were unable to recover it. 00:26:54.749 [2024-07-13 06:22:01.198529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.198732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.198760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.749 qpair failed and we were unable to recover it. 00:26:54.749 [2024-07-13 06:22:01.198933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.198934] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:54.749 [2024-07-13 06:22:01.198995] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:54.749 [2024-07-13 06:22:01.199055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.199080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.749 qpair failed and we were unable to recover it. 00:26:54.749 [2024-07-13 06:22:01.199192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.199308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.199332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.749 qpair failed and we were unable to recover it. 00:26:54.749 [2024-07-13 06:22:01.199477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.199625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.199649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:54.749 qpair failed and we were unable to recover it. 
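The "Starting SPDK v24.01.1-pre ... / DPDK 23.11.0 initialization" line interleaved above also prints the EAL parameters the nvmf target is launched with. As a rough, hedged sketch of how such an argument vector reaches DPDK (SPDK assembles it internally through its DPDK environment layer; this only illustrates the public rte_eal_init() entry point, reusing the options visible in the log):

/* Rough sketch only, not the SPDK code path: hand an EAL argument vector like
 * the one logged above to DPDK via rte_eal_init(). */
#include <rte_eal.h>
#include <stdio.h>

int main(void)
{
    char *eal_argv[] = {
        "nvmf",                           /* program name, as in the logged parameter list */
        "-c", "0xF0",                     /* core mask from the log; adjust to local cores */
        "--no-telemetry",
        "--log-level=lib.eal:6",
        "--base-virtaddr=0x200000000000",
        "--match-allocations",
        "--file-prefix=spdk0",
        "--proc-type=auto",
    };
    int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0]));

    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        fprintf(stderr, "EAL initialization failed\n");
        return 1;
    }
    rte_eal_cleanup();
    return 0;
}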
00:26:54.749 [2024-07-13 06:22:01.199836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.200003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.200035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.749 qpair failed and we were unable to recover it. 00:26:54.749 [2024-07-13 06:22:01.200240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.200456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.200486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.749 qpair failed and we were unable to recover it. 00:26:54.749 [2024-07-13 06:22:01.200687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.200816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.200843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.749 qpair failed and we were unable to recover it. 00:26:54.749 [2024-07-13 06:22:01.201004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.201144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.201172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.749 qpair failed and we were unable to recover it. 00:26:54.749 [2024-07-13 06:22:01.201398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.201610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.201638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.749 qpair failed and we were unable to recover it. 00:26:54.749 [2024-07-13 06:22:01.201789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.201971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.201997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.749 qpair failed and we were unable to recover it. 00:26:54.749 [2024-07-13 06:22:01.202165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.202329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.202357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.749 qpair failed and we were unable to recover it. 
00:26:54.749 [2024-07-13 06:22:01.202526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.202692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.202717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.749 qpair failed and we were unable to recover it. 00:26:54.749 [2024-07-13 06:22:01.202911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.203079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.203120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.749 qpair failed and we were unable to recover it. 00:26:54.749 [2024-07-13 06:22:01.203348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.203536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.203564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.749 qpair failed and we were unable to recover it. 00:26:54.749 [2024-07-13 06:22:01.203738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.203885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.203927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.749 qpair failed and we were unable to recover it. 00:26:54.749 [2024-07-13 06:22:01.204116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.204278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.204308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.749 qpair failed and we were unable to recover it. 00:26:54.749 [2024-07-13 06:22:01.204501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.205891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.205935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.749 qpair failed and we were unable to recover it. 00:26:54.749 [2024-07-13 06:22:01.206109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.206305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.206333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.749 qpair failed and we were unable to recover it. 
00:26:54.749 [2024-07-13 06:22:01.206527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.206715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.206740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.749 qpair failed and we were unable to recover it. 00:26:54.749 [2024-07-13 06:22:01.206896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.207097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.207124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.749 qpair failed and we were unable to recover it. 00:26:54.749 [2024-07-13 06:22:01.207331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.207486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.207513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.749 qpair failed and we were unable to recover it. 00:26:54.749 [2024-07-13 06:22:01.207681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.207822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.207846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.749 qpair failed and we were unable to recover it. 00:26:54.749 [2024-07-13 06:22:01.208005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.208174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.208199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.749 qpair failed and we were unable to recover it. 00:26:54.749 [2024-07-13 06:22:01.208388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.208598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.208626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.749 qpair failed and we were unable to recover it. 00:26:54.749 [2024-07-13 06:22:01.210879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.749 [2024-07-13 06:22:01.211080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.750 [2024-07-13 06:22:01.211105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.750 qpair failed and we were unable to recover it. 
00:26:54.750 [2024-07-13 06:22:01.211291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.750 [2024-07-13 06:22:01.211435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.750 [2024-07-13 06:22:01.211459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.750 qpair failed and we were unable to recover it. 00:26:54.750 [2024-07-13 06:22:01.211631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.750 [2024-07-13 06:22:01.211753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.750 [2024-07-13 06:22:01.211777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.750 qpair failed and we were unable to recover it. 00:26:54.750 [2024-07-13 06:22:01.211917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.750 [2024-07-13 06:22:01.212071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.750 [2024-07-13 06:22:01.212097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.750 qpair failed and we were unable to recover it. 00:26:54.750 [2024-07-13 06:22:01.212270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.750 [2024-07-13 06:22:01.212436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.750 [2024-07-13 06:22:01.212463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.750 qpair failed and we were unable to recover it. 00:26:54.750 [2024-07-13 06:22:01.212615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.750 [2024-07-13 06:22:01.212793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.750 [2024-07-13 06:22:01.212817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.750 qpair failed and we were unable to recover it. 00:26:54.750 [2024-07-13 06:22:01.212963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.750 [2024-07-13 06:22:01.213133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.750 [2024-07-13 06:22:01.213157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.750 qpair failed and we were unable to recover it. 00:26:54.750 [2024-07-13 06:22:01.213327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.750 [2024-07-13 06:22:01.213471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.750 [2024-07-13 06:22:01.213495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.750 qpair failed and we were unable to recover it. 
00:26:54.750 [2024-07-13 06:22:01.213616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.750 [2024-07-13 06:22:01.213759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.750 [2024-07-13 06:22:01.213783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.750 qpair failed and we were unable to recover it. 00:26:54.750 [2024-07-13 06:22:01.213908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.750 [2024-07-13 06:22:01.214029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.750 [2024-07-13 06:22:01.214054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.750 qpair failed and we were unable to recover it. 00:26:54.750 [2024-07-13 06:22:01.214206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.750 [2024-07-13 06:22:01.214365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.750 [2024-07-13 06:22:01.214392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.750 qpair failed and we were unable to recover it. 00:26:54.750 [2024-07-13 06:22:01.214544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.750 [2024-07-13 06:22:01.214698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.750 [2024-07-13 06:22:01.214723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.750 qpair failed and we were unable to recover it. 00:26:54.750 [2024-07-13 06:22:01.214851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.750 [2024-07-13 06:22:01.214977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.750 [2024-07-13 06:22:01.215002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.750 qpair failed and we were unable to recover it. 00:26:54.750 [2024-07-13 06:22:01.215137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.750 [2024-07-13 06:22:01.215280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.750 [2024-07-13 06:22:01.215305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.750 qpair failed and we were unable to recover it. 00:26:54.750 [2024-07-13 06:22:01.215421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.750 [2024-07-13 06:22:01.216393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.750 [2024-07-13 06:22:01.216432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.750 qpair failed and we were unable to recover it. 
00:26:54.750 [2024-07-13 06:22:01.216669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.750 [2024-07-13 06:22:01.216862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.750 [2024-07-13 06:22:01.216911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:54.750 qpair failed and we were unable to recover it. 00:26:55.023 [2024-07-13 06:22:01.217097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.023 [2024-07-13 06:22:01.217232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.023 [2024-07-13 06:22:01.217259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.023 qpair failed and we were unable to recover it. 00:26:55.023 [2024-07-13 06:22:01.217419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.023 [2024-07-13 06:22:01.217571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.023 [2024-07-13 06:22:01.217597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.023 qpair failed and we were unable to recover it. 00:26:55.023 [2024-07-13 06:22:01.217791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.023 [2024-07-13 06:22:01.217909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.023 [2024-07-13 06:22:01.217935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.023 qpair failed and we were unable to recover it. 00:26:55.023 [2024-07-13 06:22:01.218090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.023 [2024-07-13 06:22:01.220884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.023 [2024-07-13 06:22:01.220917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.023 qpair failed and we were unable to recover it. 00:26:55.023 [2024-07-13 06:22:01.221129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.023 [2024-07-13 06:22:01.221316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.023 [2024-07-13 06:22:01.221341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.023 qpair failed and we were unable to recover it. 00:26:55.023 [2024-07-13 06:22:01.221524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.023 [2024-07-13 06:22:01.221674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.023 [2024-07-13 06:22:01.221699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.023 qpair failed and we were unable to recover it. 
00:26:55.023 [2024-07-13 06:22:01.221852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.023 [2024-07-13 06:22:01.222010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.023 [2024-07-13 06:22:01.222036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.023 qpair failed and we were unable to recover it. 00:26:55.023 [2024-07-13 06:22:01.222215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.023 [2024-07-13 06:22:01.222358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.023 [2024-07-13 06:22:01.222383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.023 qpair failed and we were unable to recover it. 00:26:55.023 [2024-07-13 06:22:01.222519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.023 [2024-07-13 06:22:01.222675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.023 [2024-07-13 06:22:01.222699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.023 qpair failed and we were unable to recover it. 00:26:55.023 [2024-07-13 06:22:01.222881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.023 [2024-07-13 06:22:01.223035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.023 [2024-07-13 06:22:01.223061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.023 qpair failed and we were unable to recover it. 00:26:55.023 [2024-07-13 06:22:01.223237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.023 [2024-07-13 06:22:01.223419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.223444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.223592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.223749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.223774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.223929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.224076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.224100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 
00:26:55.024 [2024-07-13 06:22:01.224281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.224456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.224480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.224636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.224789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.224814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.225005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.225135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.225159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.225341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.225484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.225509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.225635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.225814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.225839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.226038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.226194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.226219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.226363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.226571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.226600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 
00:26:55.024 [2024-07-13 06:22:01.226793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.226963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.226989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.227141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.229907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.229940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.230173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.230332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.230357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.230510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.230687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.230712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.230882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.231064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.231094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.231213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.231388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.231413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.231575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.231729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.231754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 
00:26:55.024 [2024-07-13 06:22:01.231935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.232083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.232108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.232259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.232405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.232429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.232578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.232861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.232909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.233138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.233295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.233320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.233449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.233624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.233648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.233797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.233966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.233992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.234150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.234368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.234393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 
00:26:55.024 [2024-07-13 06:22:01.234609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.234759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.234789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.234953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.235094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.235122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.235261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.235420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.235447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.235617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.235766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.235792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.235940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.236105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.236133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.238880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.239046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.239070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.239228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.239386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.239412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 
00:26:55.024 [2024-07-13 06:22:01.239618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.239784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.239826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.239990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.240229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.240269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.240425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.240575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.240599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.240750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.240871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.240903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.241060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.241230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.241255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.241427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.241549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.241576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.241754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.241913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.241940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 
00:26:55.024 [2024-07-13 06:22:01.242110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.242261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.242288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.242491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.242678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.242704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.242853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.243011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.243037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.243194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.243315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.243339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.243476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.243648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.243677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.243944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.244160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.244189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.244381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.244542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.244570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 
00:26:55.024 [2024-07-13 06:22:01.244832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.245021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.245047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.245233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.245380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.245405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.248886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.249071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.249097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.249248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.249415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.249457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.249705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.249930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.249956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.250132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.250297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.250341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.250573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.250732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.250759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 
00:26:55.024 [2024-07-13 06:22:01.250916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.251157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.251183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.251306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.251464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.251489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.251627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.251827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.251853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.252002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.252172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.252199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.252352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.252496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.252521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.024 qpair failed and we were unable to recover it. 00:26:55.024 [2024-07-13 06:22:01.252753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.252927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.024 [2024-07-13 06:22:01.252954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.253109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.253343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.253369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 
00:26:55.025 [2024-07-13 06:22:01.253490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.253716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.253740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.253883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.254035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.254059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.254218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.254381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.254407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.254638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.254788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.254813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.254980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.255208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.255248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.255453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.255645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.255674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.255940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.256132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.256163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 
00:26:55.025 [2024-07-13 06:22:01.256351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.256517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.256545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.256763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.256935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.256964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.257152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.257343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.257372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.258884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.259037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.259064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.259207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.259385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.259413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.259583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.259774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.259799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.259958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.260163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.260192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 
00:26:55.025 [2024-07-13 06:22:01.260407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.260601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.260630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.260776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.260997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.261027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.261225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.261436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.261464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.261611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.025 [2024-07-13 06:22:01.263881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.263928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.264114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.264261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.264291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.264486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.264700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.264730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.264901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.265048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.265077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 
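Note on the "EAL: No free 2048 kB hugepages reported on node 1" line embedded above: it is an informational notice from DPDK's EAL printed while the SPDK application starts, and is separate from the connect() failures surrounding it. A minimal standalone sketch, assuming the standard Linux sysfs layout (the node1 path below only exists on NUMA machines that expose a second node), shows how the per-node free 2048 kB hugepage count could be inspected; it is not part of the test itself:

/*
 * Hedged sketch (not SPDK/DPDK code): read the free 2048 kB hugepage
 * count for NUMA node 1, the quantity the EAL notice above refers to.
 * The sysfs path is the standard Linux location and is an assumption
 * about the test machine; single-node hosts will not have it.
 */
#include <stdio.h>

int main(void)
{
    const char *path =
        "/sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages";
    FILE *f = fopen(path, "r");
    if (!f) {
        perror("fopen");          /* e.g. ENOENT on a non-NUMA host */
        return 1;
    }

    unsigned long free_pages = 0;
    if (fscanf(f, "%lu", &free_pages) == 1)
        printf("node1 free 2048 kB hugepages: %lu\n", free_pages);

    fclose(f);
    return 0;
}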
00:26:55.025 [2024-07-13 06:22:01.265287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.265497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.265526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.265710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.265903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.265933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.266135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.266365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.266394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.266581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.266757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.266783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.266964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.267153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.267192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.267390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.267584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.267615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.267785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.267930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.267959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 
00:26:55.025 [2024-07-13 06:22:01.268122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.268345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.268375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.268563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.268727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.268753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.268917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.269104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.269132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.269295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.269469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.269496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.269626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.269853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.269938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.270070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.270186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.270212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.270333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.270480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.270506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 
00:26:55.025 [2024-07-13 06:22:01.270618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.270737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.270764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.273878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.274033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.274060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.274229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.274378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.274404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.274536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.274660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.274686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.274873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.275017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.275043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.275176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.275322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.275348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.275502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.275658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.275683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 
00:26:55.025 [2024-07-13 06:22:01.275833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.275992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.276018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.276147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.276295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.276320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.276440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.276589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.276614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.276764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.276919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.276944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.277067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.277247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.277272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.277420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.277536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.277561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.277687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.277832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.277857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 
00:26:55.025 [2024-07-13 06:22:01.278237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.278391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.278417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.278557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.278697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.278722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.278879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.279060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.279085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.279254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.279425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.279450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.279593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.279736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.279760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.279883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.280007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.280032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.025 [2024-07-13 06:22:01.280207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.280350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.280375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 
00:26:55.025 [2024-07-13 06:22:01.280565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.280712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.025 [2024-07-13 06:22:01.280737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.025 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.280889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.283879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.283908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.284065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.284234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.284260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.284413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.284532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.284557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.284720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.284880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.284906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.285065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.285187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.285213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.285335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.285480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.285506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 
00:26:55.026 [2024-07-13 06:22:01.285636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.285758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.285783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.285908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.286056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.286081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.286236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.286413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.286438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.286564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.286709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.286733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.286923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.287039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.287065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.287191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.287312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.287338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.287496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.287658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.287699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 
00:26:55.026 [2024-07-13 06:22:01.287878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.288015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.288043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.288168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.288313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.288338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.288487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.288623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.288648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.288775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.288936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.288963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.289114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.289247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.289271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.289421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.289570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.289596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d84000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.289774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.289934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.289965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 
00:26:55.026 [2024-07-13 06:22:01.290089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.290237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.290263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.290380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.290548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.290573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.290717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.290853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.290885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.291029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.291153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.291191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.291338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.291485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.291510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.291679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.291801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.291826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.291963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.292112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.292139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 
00:26:55.026 [2024-07-13 06:22:01.292306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.292452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.292478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.292602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.292716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.292742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.292871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.292989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.293015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.293152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.293266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.293291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.293415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.293568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.293593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.293720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.293855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.293888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.293998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.294114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.294140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 
00:26:55.026 [2024-07-13 06:22:01.294277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.294391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.294416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.294577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.294716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.294741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.294900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.295045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.295073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.295198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.295361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.295387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.295562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.295701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.295726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.026 qpair failed and we were unable to recover it. 00:26:55.026 [2024-07-13 06:22:01.295842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.295974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.026 [2024-07-13 06:22:01.296000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.027 qpair failed and we were unable to recover it. 00:26:55.027 [2024-07-13 06:22:01.296115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.296230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.296255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.027 qpair failed and we were unable to recover it. 
00:26:55.027 [2024-07-13 06:22:01.296401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.296508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.296533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.027 qpair failed and we were unable to recover it. 00:26:55.027 [2024-07-13 06:22:01.296670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.296785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.296809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.027 qpair failed and we were unable to recover it. 00:26:55.027 [2024-07-13 06:22:01.296976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.297095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.297120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.027 qpair failed and we were unable to recover it. 00:26:55.027 [2024-07-13 06:22:01.297303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.297464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.297490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.027 qpair failed and we were unable to recover it. 00:26:55.027 [2024-07-13 06:22:01.297604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.297716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.297742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.027 qpair failed and we were unable to recover it. 00:26:55.027 [2024-07-13 06:22:01.297886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.298035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.298060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.027 qpair failed and we were unable to recover it. 00:26:55.027 [2024-07-13 06:22:01.298205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.298325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.298350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.027 qpair failed and we were unable to recover it. 
00:26:55.027 [2024-07-13 06:22:01.298500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.298676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.298702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.027 qpair failed and we were unable to recover it. 00:26:55.027 [2024-07-13 06:22:01.298824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.298990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.299017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.027 qpair failed and we were unable to recover it. 00:26:55.027 [2024-07-13 06:22:01.299164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.299304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.299329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.027 qpair failed and we were unable to recover it. 00:26:55.027 [2024-07-13 06:22:01.299446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.299560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.299585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.027 qpair failed and we were unable to recover it. 00:26:55.027 [2024-07-13 06:22:01.299761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.299879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.299905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.027 qpair failed and we were unable to recover it. 00:26:55.027 [2024-07-13 06:22:01.300025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.300148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.300178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.027 qpair failed and we were unable to recover it. 00:26:55.027 [2024-07-13 06:22:01.300325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.300444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.300469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.027 qpair failed and we were unable to recover it. 
00:26:55.027 [2024-07-13 06:22:01.300584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.300733] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:55.027 [2024-07-13 06:22:01.300758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.300782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.027 qpair failed and we were unable to recover it. 00:26:55.027 [2024-07-13 06:22:01.300913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.301030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.301056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.027 qpair failed and we were unable to recover it. 00:26:55.027 [2024-07-13 06:22:01.301170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.301347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.301372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.027 qpair failed and we were unable to recover it. 00:26:55.027 [2024-07-13 06:22:01.301539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.301683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.301708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.027 qpair failed and we were unable to recover it. 00:26:55.027 [2024-07-13 06:22:01.301829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.302003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.302029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.027 qpair failed and we were unable to recover it. 00:26:55.027 [2024-07-13 06:22:01.302208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.302350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.302376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.027 qpair failed and we were unable to recover it. 00:26:55.027 [2024-07-13 06:22:01.302522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.302644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.302669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.027 qpair failed and we were unable to recover it. 
00:26:55.027 [2024-07-13 06:22:01.302815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.302967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.302993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.027 qpair failed and we were unable to recover it. 00:26:55.027 [2024-07-13 06:22:01.303168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.303366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.303391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.027 qpair failed and we were unable to recover it. 00:26:55.027 [2024-07-13 06:22:01.303537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.303681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.303706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.027 qpair failed and we were unable to recover it. 00:26:55.027 [2024-07-13 06:22:01.303820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.304008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.304034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.027 qpair failed and we were unable to recover it. 00:26:55.027 [2024-07-13 06:22:01.304172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.304284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.304309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.027 qpair failed and we were unable to recover it. 00:26:55.027 [2024-07-13 06:22:01.304453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.304594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.304618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.027 qpair failed and we were unable to recover it. 00:26:55.027 [2024-07-13 06:22:01.304735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.304883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.027 [2024-07-13 06:22:01.304909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.027 qpair failed and we were unable to recover it. 
00:26:55.027 [2024-07-13 06:22:01.305053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.027 [2024-07-13 06:22:01.305197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.027 [2024-07-13 06:22:01.305222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420
00:26:55.027 qpair failed and we were unable to recover it.
00:26:55.027-00:26:55.031 [2024-07-13 06:22:01.305338 .. 06:22:01.352340] the same four-line sequence repeats for every subsequent qpair connect attempt: two posix.c:1032:posix_sock_create connect() failures with errno = 111, one nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it."
00:26:55.031 [2024-07-13 06:22:01.352489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.031 [2024-07-13 06:22:01.352598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.031 [2024-07-13 06:22:01.352623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.031 qpair failed and we were unable to recover it. 00:26:55.031 [2024-07-13 06:22:01.352780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.031 [2024-07-13 06:22:01.352904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.031 [2024-07-13 06:22:01.352930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.031 qpair failed and we were unable to recover it. 00:26:55.031 [2024-07-13 06:22:01.353097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.031 [2024-07-13 06:22:01.353207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.031 [2024-07-13 06:22:01.353232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.031 qpair failed and we were unable to recover it. 00:26:55.031 [2024-07-13 06:22:01.353378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.031 [2024-07-13 06:22:01.353496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.031 [2024-07-13 06:22:01.353521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.031 qpair failed and we were unable to recover it. 00:26:55.031 [2024-07-13 06:22:01.353639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.031 [2024-07-13 06:22:01.353783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.031 [2024-07-13 06:22:01.353808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.031 qpair failed and we were unable to recover it. 00:26:55.031 [2024-07-13 06:22:01.353960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.031 [2024-07-13 06:22:01.354105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.031 [2024-07-13 06:22:01.354130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.031 qpair failed and we were unable to recover it. 00:26:55.031 [2024-07-13 06:22:01.354267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.031 [2024-07-13 06:22:01.354440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.031 [2024-07-13 06:22:01.354465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.031 qpair failed and we were unable to recover it. 
00:26:55.031 [2024-07-13 06:22:01.354592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.031 [2024-07-13 06:22:01.354732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.031 [2024-07-13 06:22:01.354757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.031 qpair failed and we were unable to recover it. 00:26:55.031 [2024-07-13 06:22:01.354907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.031 [2024-07-13 06:22:01.355043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.031 [2024-07-13 06:22:01.355068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.031 qpair failed and we were unable to recover it. 00:26:55.031 [2024-07-13 06:22:01.355248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.031 [2024-07-13 06:22:01.355365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.031 [2024-07-13 06:22:01.355390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.031 qpair failed and we were unable to recover it. 00:26:55.031 [2024-07-13 06:22:01.355533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.031 [2024-07-13 06:22:01.355684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.031 [2024-07-13 06:22:01.355709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.031 qpair failed and we were unable to recover it. 00:26:55.031 [2024-07-13 06:22:01.355894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.031 [2024-07-13 06:22:01.356017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.356042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.356167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.356311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.356337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.356484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.356605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.356630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 
00:26:55.032 [2024-07-13 06:22:01.356739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.356881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.356907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.357030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.357165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.357190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.357339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.357460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.357486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.357663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.357811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.357836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.357991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.358140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.358165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.358333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.358474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.358499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.358620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.358742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.358768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 
00:26:55.032 [2024-07-13 06:22:01.358941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.359065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.359090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.359214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.359359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.359386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.359538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.359683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.359709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.359859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.360035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.360070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.360213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.360356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.360381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.360537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.360658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.360684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.360826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.360995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.361032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 
00:26:55.032 [2024-07-13 06:22:01.361181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.361323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.361359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.361474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.361620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.361646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.361794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.361919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.361945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.362064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.362211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.362236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.362398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.362511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.362536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.362688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.362828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.362854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.363015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.363167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.363192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 
00:26:55.032 [2024-07-13 06:22:01.363341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.363488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.363514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.363649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.363763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.363790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.363942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.364090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.364120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.364251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.364366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.364391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.364541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.364657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.364682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.364801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.364953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.364979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.365157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.365277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.365304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 
00:26:55.032 [2024-07-13 06:22:01.365422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.365570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.365596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.365723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.365898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.365924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.366050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.366195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.366220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.366362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.366483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.366515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.366636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.366782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.366808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.366963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.367112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.367142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.367263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.367406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.367431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 
00:26:55.032 [2024-07-13 06:22:01.367557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.367678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.367704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.367848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.367993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.368019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.368207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.368325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.368350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.368501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.368646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.368671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.368843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.369001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.369029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.369181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.369302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.369327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.369470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.369614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.369639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 
00:26:55.032 [2024-07-13 06:22:01.369792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.369913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.369940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.370089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.370238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.370267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.370384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.370494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.370519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.370701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.370876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.370902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.032 [2024-07-13 06:22:01.371077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.371222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.032 [2024-07-13 06:22:01.371247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.032 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.371403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.371529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.371553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.371709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.371821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.371846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 
00:26:55.033 [2024-07-13 06:22:01.371969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.372083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.372109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.372283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.372427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.372453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.372634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.372775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.372800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.372915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.373062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.373088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.373238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.373358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.373388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.373531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.373707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.373732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.373853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.374007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.374033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 
00:26:55.033 [2024-07-13 06:22:01.374206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.374357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.374382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.374491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.374672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.374696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.374810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.374984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.375010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.375151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.375305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.375330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.375452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.375607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.375632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.375779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.375931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.375957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.376127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.376301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.376326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 
00:26:55.033 [2024-07-13 06:22:01.376443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.376565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.376591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.376715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.376830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.376855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.377013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.377129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.377155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.377332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.377486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.377511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.377627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.377770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.377796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.377944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.378064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.378089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.378214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.378337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.378362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 
00:26:55.033 [2024-07-13 06:22:01.378493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.378607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.378633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.378793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.378917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.378943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.379090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.379219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.379245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.379362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.379499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.379524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.379675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.379814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.379838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.379981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.380163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.380188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.380309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.380453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.380478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 
00:26:55.033 [2024-07-13 06:22:01.380612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.380755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.380780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.380926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.381070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.381095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.381219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.381366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.381391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.381535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.381652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.381677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.381805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.381949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.381974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.382121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.382247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.382272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.382419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.382563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.382588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 
00:26:55.033 [2024-07-13 06:22:01.382741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.382855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.382886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.383032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.383144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.383169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.383318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.383462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.383487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.383632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.383786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.383811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.383941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.384080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.384106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.384285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.384408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.384433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.384557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.384678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.384702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 
00:26:55.033 [2024-07-13 06:22:01.384845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.384993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.385018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.385168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.385339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.385363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.385492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.385667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.385692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.385875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.386017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.386042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.386190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.386356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.386381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.386550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.386677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.386701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.386859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.387015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.387040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 
00:26:55.033 [2024-07-13 06:22:01.387153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.387297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.387322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.387442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.387593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.387618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.387739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.387891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.387917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.388094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.388209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.388234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.033 [2024-07-13 06:22:01.388405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.388543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.033 [2024-07-13 06:22:01.388567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.033 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.388685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.388802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.388827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.388970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.389110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.389135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 
00:26:55.034 [2024-07-13 06:22:01.389273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.389402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.389426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.389581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.389722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.389747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.389905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.390050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.390076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.390199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.390345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.390370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.390512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.390626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.390651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.390774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.390925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.390952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.391135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.391269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.391294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 
00:26:55.034 [2024-07-13 06:22:01.391439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.391588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.391613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.391762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.391888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.391914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.392037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.392208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.392233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.392400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.392552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.392577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.392720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.392874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.392900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.393031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.393174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.393200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.393374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.393522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.393548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 
00:26:55.034 [2024-07-13 06:22:01.393671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.393820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.393844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.394001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.394141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.394166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.394292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.394404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.394429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.394585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.394741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.394766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.394939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.395056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.395081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.395233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.395345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.395371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.395516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.395667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.395692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 
00:26:55.034 [2024-07-13 06:22:01.395812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.395952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.395977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.396162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.396292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.396318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.396491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.396640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.396665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.396783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.396938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.396963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.397085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.397212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.397237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.397384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.397499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.397524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.397695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.397805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.397830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 
00:26:55.034 [2024-07-13 06:22:01.397978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.398098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.398123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.398277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.398433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.398459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.398602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.398739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.398765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.398885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.399000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.399025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.399173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.399322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.399347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.399530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.399690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.399714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.399836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.399984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.400009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 
00:26:55.034 [2024-07-13 06:22:01.400124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.400268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.400293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.400465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.400573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.400598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.400742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.400888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.400914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.401066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.401185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.401209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.401329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.401479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.401503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.401621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.401762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.401786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.401964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.402086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.402112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 
00:26:55.034 [2024-07-13 06:22:01.402234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.402354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.402379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.402522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.402667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.402693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.402832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.402994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.403020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.403177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.403315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.403340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.403483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.403595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.403620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.403740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.403924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.403950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.404090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.404210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.404236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 
00:26:55.034 [2024-07-13 06:22:01.404363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.404522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.404547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.404694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.404842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.034 [2024-07-13 06:22:01.404884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.034 qpair failed and we were unable to recover it. 00:26:55.034 [2024-07-13 06:22:01.405015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.405160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.405185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.405357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.405480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.405505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.405651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.405802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.405827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.406007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.406180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.406205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.406356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.406511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.406537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 
00:26:55.035 [2024-07-13 06:22:01.406708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.406825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.406850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.407020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.407198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.407223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.407371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.407484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.407509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.407659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.407837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.407862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.407981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.408139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.408164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.408316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.408484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.408509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.408648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.408769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.408794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 
00:26:55.035 [2024-07-13 06:22:01.408967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.409107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.409132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.409272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.409402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.409427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.409544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.409664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.409689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.409834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.409960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.409986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.410101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.410252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.410277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.410426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.410585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.410610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.410723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.410888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.410914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 
00:26:55.035 [2024-07-13 06:22:01.411029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.411159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.411184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.411331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.411451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.411476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.411601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.411719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.411745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.411874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.411997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.412022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.412171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.412317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.412342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.412481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.412652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.412676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.412799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.412980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.413007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 
00:26:55.035 [2024-07-13 06:22:01.413152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.413277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.413303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.413425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.413535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.413560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.413732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.413908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.413938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.414088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.414258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.414283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.414458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.414602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.414628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.414741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.414857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.414889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.415037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.415158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.415184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 
00:26:55.035 [2024-07-13 06:22:01.415347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.415492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.415518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.415651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.415824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.415849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.416005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.416151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.416175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.416320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.416461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.416487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.416626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.416772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.416798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.416948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.417066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.417096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.417243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.417418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.417443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 
00:26:55.035 [2024-07-13 06:22:01.417562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.417705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.417731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.417848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.417994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.418020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.418138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.418254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.418279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.418451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.418565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.418591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.418741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.418888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.418914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.419032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.419178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.419203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 00:26:55.035 [2024-07-13 06:22:01.419351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.419462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.035 [2024-07-13 06:22:01.419486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.035 qpair failed and we were unable to recover it. 
00:26:55.035 [2024-07-13 06:22:01.419597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.035 [2024-07-13 06:22:01.419737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.035 [2024-07-13 06:22:01.419762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420
00:26:55.035 qpair failed and we were unable to recover it.
00:26:55.035 [2024-07-13 06:22:01.419874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.035 [2024-07-13 06:22:01.420003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.035 [2024-07-13 06:22:01.420034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420
00:26:55.035 qpair failed and we were unable to recover it.
00:26:55.035 [2024-07-13 06:22:01.420163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.035 [2024-07-13 06:22:01.420283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.035 [2024-07-13 06:22:01.420309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420
00:26:55.035 qpair failed and we were unable to recover it.
00:26:55.035 [2024-07-13 06:22:01.420431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.035 [2024-07-13 06:22:01.420549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.035 [2024-07-13 06:22:01.420574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420
00:26:55.035 qpair failed and we were unable to recover it.
00:26:55.035 [2024-07-13 06:22:01.420687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.035 [2024-07-13 06:22:01.420773] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:26:55.035 [2024-07-13 06:22:01.420839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.035 [2024-07-13 06:22:01.420871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420
00:26:55.035 qpair failed and we were unable to recover it.
00:26:55.035 [2024-07-13 06:22:01.420922] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:55.035 [2024-07-13 06:22:01.420943] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:55.035 [2024-07-13 06:22:01.420957] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:55.035 [2024-07-13 06:22:01.420983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.035 [2024-07-13 06:22:01.421024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:26:55.035 [2024-07-13 06:22:01.421099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.035 [2024-07-13 06:22:01.421124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420
00:26:55.035 qpair failed and we were unable to recover it.
00:26:55.035 [2024-07-13 06:22:01.421056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:26:55.035 [2024-07-13 06:22:01.421081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:26:55.035 [2024-07-13 06:22:01.421083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:26:55.035 [2024-07-13 06:22:01.421248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.035 [2024-07-13 06:22:01.421385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.035 [2024-07-13 06:22:01.421412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420
00:26:55.035 qpair failed and we were unable to recover it.
00:26:55.035 [2024-07-13 06:22:01.421535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.035 [2024-07-13 06:22:01.421643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.035 [2024-07-13 06:22:01.421668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420
00:26:55.035 qpair failed and we were unable to recover it.
00:26:55.035 [2024-07-13 06:22:01.421811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.035 [2024-07-13 06:22:01.421931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.035 [2024-07-13 06:22:01.421957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420
00:26:55.035 qpair failed and we were unable to recover it.
00:26:55.035 [2024-07-13 06:22:01.422112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.036 [2024-07-13 06:22:01.422242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.036 [2024-07-13 06:22:01.422272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420
00:26:55.036 qpair failed and we were unable to recover it.
00:26:55.036 [2024-07-13 06:22:01.422437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.036 [2024-07-13 06:22:01.422609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.036 [2024-07-13 06:22:01.422634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420
00:26:55.036 qpair failed and we were unable to recover it.
00:26:55.036 [2024-07-13 06:22:01.422781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.036 [2024-07-13 06:22:01.422895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.036 [2024-07-13 06:22:01.422928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420
00:26:55.036 qpair failed and we were unable to recover it.
00:26:55.036 [2024-07-13 06:22:01.423074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.036 [2024-07-13 06:22:01.423191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.036 [2024-07-13 06:22:01.423217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420
00:26:55.036 qpair failed and we were unable to recover it.
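(Aside, not part of the captured console output: the app_setup_trace NOTICE lines a few entries above already spell out how to collect the trace data. A hypothetical manual follow-up on the test node — assuming the nvmf target is still running as trace instance 0 and that spdk_trace is on PATH or invoked from the SPDK build tree — would be a sketch like:)

    # snapshot of trace events at runtime, exactly as the NOTICE suggests
    spdk_trace -s nvmf -i 0
    # keep the shm trace file for offline analysis/debug (destination is arbitrary)
    cp /dev/shm/nvmf_trace.0 /tmp/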
00:26:55.036 [2024-07-13 06:22:01.423329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.423470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.423495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.423632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.423816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.423840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.424019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.424157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.424181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.424296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.424422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.424447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.424555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.424722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.424746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.424861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.425019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.425044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.425187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.425305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.425334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 
00:26:55.036 [2024-07-13 06:22:01.425446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.425565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.425590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.425734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.425843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.425875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.425990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.426097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.426122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.426237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.426380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.426405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.426516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.426663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.426688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.426800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.426946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.426972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.427088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.427196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.427221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 
00:26:55.036 [2024-07-13 06:22:01.427339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.427488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.427513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.427620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.427736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.427761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.427903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.428052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.428085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.428227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.428339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.428364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.428483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.428618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.428643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.428753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.428890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.428916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.429062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.429175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.429200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 
00:26:55.036 [2024-07-13 06:22:01.429307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.429423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.429447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.429577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.429717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.429742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.429933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.430137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.430162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.430286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.430475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.430500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.430649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.430777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.430803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.430939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.431050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.431075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.431197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.431366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.431391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 
00:26:55.036 [2024-07-13 06:22:01.431534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.431680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.431705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.431886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.432037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.432062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.432188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.432325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.432350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.432462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.432614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.432640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.432824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.432974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.432999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.433144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.433260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.433285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.433431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.433573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.433598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 
00:26:55.036 [2024-07-13 06:22:01.433743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.433871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.433896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.434075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.434239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.434264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.434480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.434596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.434621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.434762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.434941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.434967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.435080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.435203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.435229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.435357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.435499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.435524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.435643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.435756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.435782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 
00:26:55.036 [2024-07-13 06:22:01.435926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.436040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.436065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.436181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.436296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.436320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.436440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.436586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.436611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.436755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.436876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.436902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.437030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.437142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.437167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.437292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.437432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.437457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.437576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.437719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.437744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 
00:26:55.036 [2024-07-13 06:22:01.437889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.438008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.438034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.438181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.438333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.438358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.438480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.438635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.438660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.438806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.438924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.438950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.439096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.439264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.439287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.036 qpair failed and we were unable to recover it. 00:26:55.036 [2024-07-13 06:22:01.439430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.439573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.036 [2024-07-13 06:22:01.439597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.439736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.439876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.439901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 
00:26:55.037 [2024-07-13 06:22:01.440044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.440170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.440194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.440318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.440462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.440487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.440597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.440725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.440750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.440898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.441018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.441043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.441191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.441337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.441362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.441511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.441632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.441659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.441768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.441915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.441941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 
00:26:55.037 [2024-07-13 06:22:01.442057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.442201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.442225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.442355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.442502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.442527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.442641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.442758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.442784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.442909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.443029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.443056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.443222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.443338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.443363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.443512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.443637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.443662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.443812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.443937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.443963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 
00:26:55.037 [2024-07-13 06:22:01.444078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.444225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.444251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.444394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.444508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.444533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.444640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.444761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.444786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.444915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.445060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.445085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.445222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.445336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.445361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.445480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.445616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.445641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.445775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.445894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.445920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 
00:26:55.037 [2024-07-13 06:22:01.446034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.446161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.446186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.446310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.446430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.446455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.446597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.446712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.446737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.446909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.447052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.447077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.447232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.447349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.447375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.447506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.447675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.447701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.447846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.447976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.448001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 
00:26:55.037 [2024-07-13 06:22:01.448126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.448240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.448265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.448388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.448505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.448531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.448647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.448791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.448817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.448936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.449054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.449079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.449201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.449307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.449332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.449511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.449633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.449660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.449831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.449959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.449985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 
00:26:55.037 [2024-07-13 06:22:01.450105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.450226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.450252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.450366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.450479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.450504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.450621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.450812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.450837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.450969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.451079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.451104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.451232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.451343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.451369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.451488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.451603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.451628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.451780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.451909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.451936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 
00:26:55.037 [2024-07-13 06:22:01.452085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.452203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.452228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.452338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.452479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.452504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.452690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.452798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.452823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.452946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.453093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.453118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.453231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.453373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.453398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.453511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.453647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.453672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.453809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.453989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.454015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 
00:26:55.037 [2024-07-13 06:22:01.454160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.454279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.454304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.454425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.454539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.454566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.454712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.454873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.454899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.455013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.455128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.037 [2024-07-13 06:22:01.455153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.037 qpair failed and we were unable to recover it. 00:26:55.037 [2024-07-13 06:22:01.455286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.455399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.455424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.455564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.455684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.455709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.455817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.455933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.455959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 
00:26:55.038 [2024-07-13 06:22:01.456072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.456214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.456239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.456360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.456504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.456529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.456673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.456780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.456806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.456930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.457082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.457107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.457222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.457332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.457357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.457466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.457579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.457604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.457790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.457931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.457958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 
00:26:55.038 [2024-07-13 06:22:01.458076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.458214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.458240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.458357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.458503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.458528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.458672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.458811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.458837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.458989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.459108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.459133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.459336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.459455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.459480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.459620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.459769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.459794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.459917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.460058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.460083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 
00:26:55.038 [2024-07-13 06:22:01.460196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.460336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.460361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.460501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.460651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.460676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.460783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.460902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.460927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.461080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.461227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.461252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.461393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.461520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.461544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.461653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.461797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.461822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.461940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.462083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.462108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 
00:26:55.038 [2024-07-13 06:22:01.462219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.462357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.462381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.462505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.462629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.462654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.462804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.462945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.462971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.463112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.463227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.463252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.463402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.463591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.463616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.463732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.463855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.463889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.464002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.464139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.464164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 
00:26:55.038 [2024-07-13 06:22:01.464274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.464386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.464411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.464533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.464649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.464674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.464781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.464964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.464990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.465164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.465287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.465314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.465457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.465579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.465604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.465752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.465874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.465901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.466027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.466143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.466168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 
00:26:55.038 [2024-07-13 06:22:01.466328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.466473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.466503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.466628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.466793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.466817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.466941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.467080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.467105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.467251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.467369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.467394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.467510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.467633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.467658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.467787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.467938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.467964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.468080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.468196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.468221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 
00:26:55.038 [2024-07-13 06:22:01.468333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.468459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.468484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.468636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.468754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.468781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.468903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.469015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.469040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.469197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.469343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.469373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.469498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.469640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.469665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.469788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.469913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.469938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.470061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.470184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.470208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 
00:26:55.038 [2024-07-13 06:22:01.470331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.470516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.470541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.470663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.470769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.470794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.470919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.471064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.471089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.471200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.471313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.471338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.471481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.471619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.471644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.471785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.471909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.038 [2024-07-13 06:22:01.471935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.038 qpair failed and we were unable to recover it. 00:26:55.038 [2024-07-13 06:22:01.472059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.472172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.472205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 
00:26:55.039 [2024-07-13 06:22:01.472328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.472468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.472494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.472670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.472813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.472838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.472964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.473092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.473118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.473231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.473343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.473368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.473514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.473659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.473685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.473860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.473983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.474008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.474128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.474267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.474292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 
00:26:55.039 [2024-07-13 06:22:01.474410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.474552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.474577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.474698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.474882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.474908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.475033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.475155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.475184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.475338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.475455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.475480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.475604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.475725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.475750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.475860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.476049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.476076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.476224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.476367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.476392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 
00:26:55.039 [2024-07-13 06:22:01.476533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.476653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.476678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.476831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.476952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.476978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.477088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.477203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.477229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.477359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.477506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.477531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.477652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.477765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.477790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.477909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.478038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.478063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.478197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.478333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.478358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 
00:26:55.039 [2024-07-13 06:22:01.478490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.478628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.478653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.478765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.478885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.478912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.479030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.479171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.479196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.479315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.479464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.479490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.479609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.479754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.479779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.479901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.480023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.480048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.480170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.480308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.480332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 
00:26:55.039 [2024-07-13 06:22:01.480438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.480550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.480574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.480688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.480864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.480895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.481032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.481171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.481196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.481340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.481452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.481477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.481622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.481739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.481764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.481890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.482032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.482057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.482192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.482300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.482325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 
00:26:55.039 [2024-07-13 06:22:01.482447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.482597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.482622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.482739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.482882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.482908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.483026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.483148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.483173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.483313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.483450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.483476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.483591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.483735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.483760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.483903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.484046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.484071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.484183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.484323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.484348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 
00:26:55.039 [2024-07-13 06:22:01.484508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.484648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.484674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.484832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.484988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.485014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.485137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.485257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.485282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.485400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.485515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.485540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.485687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.485810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.485835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.486000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.486122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.486147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.486261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.486387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.486412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 
00:26:55.039 [2024-07-13 06:22:01.486566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.486678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.486703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.486824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.486940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.486966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.487118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.487233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.487258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.487400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.487546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.039 [2024-07-13 06:22:01.487570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.039 qpair failed and we were unable to recover it. 00:26:55.039 [2024-07-13 06:22:01.487693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.487861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.487893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.040 qpair failed and we were unable to recover it. 00:26:55.040 [2024-07-13 06:22:01.488035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.488150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.488175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.040 qpair failed and we were unable to recover it. 00:26:55.040 [2024-07-13 06:22:01.488315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.488456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.488481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.040 qpair failed and we were unable to recover it. 
00:26:55.040 [2024-07-13 06:22:01.488606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.488724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.488754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.040 qpair failed and we were unable to recover it. 00:26:55.040 [2024-07-13 06:22:01.488888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.489009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.489035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.040 qpair failed and we were unable to recover it. 00:26:55.040 [2024-07-13 06:22:01.489158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.489279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.489305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.040 qpair failed and we were unable to recover it. 00:26:55.040 [2024-07-13 06:22:01.489448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.489591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.489616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.040 qpair failed and we were unable to recover it. 00:26:55.040 [2024-07-13 06:22:01.489768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.489920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.489946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.040 qpair failed and we were unable to recover it. 00:26:55.040 [2024-07-13 06:22:01.490175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.490318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.490343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.040 qpair failed and we were unable to recover it. 00:26:55.040 [2024-07-13 06:22:01.490464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.490607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.490632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.040 qpair failed and we were unable to recover it. 
00:26:55.040 [2024-07-13 06:22:01.490754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.490862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.490893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.040 qpair failed and we were unable to recover it. 00:26:55.040 [2024-07-13 06:22:01.491005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.491149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.491174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.040 qpair failed and we were unable to recover it. 00:26:55.040 [2024-07-13 06:22:01.491288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.491410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.491435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.040 qpair failed and we were unable to recover it. 00:26:55.040 [2024-07-13 06:22:01.491580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.491717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.491742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.040 qpair failed and we were unable to recover it. 00:26:55.040 [2024-07-13 06:22:01.491966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.492109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.492134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.040 qpair failed and we were unable to recover it. 00:26:55.040 [2024-07-13 06:22:01.492281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.492391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.492416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.040 qpair failed and we were unable to recover it. 00:26:55.040 [2024-07-13 06:22:01.492542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.492679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.492704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.040 qpair failed and we were unable to recover it. 
00:26:55.040 [2024-07-13 06:22:01.492848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.493074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.493100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.040 qpair failed and we were unable to recover it. 00:26:55.040 [2024-07-13 06:22:01.493216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.493356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.493382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.040 qpair failed and we were unable to recover it. 00:26:55.040 [2024-07-13 06:22:01.493505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.493639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.493664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.040 qpair failed and we were unable to recover it. 00:26:55.040 [2024-07-13 06:22:01.493838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.493950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.493976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.040 qpair failed and we were unable to recover it. 00:26:55.040 [2024-07-13 06:22:01.494171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.494292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.494317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.040 qpair failed and we were unable to recover it. 00:26:55.040 [2024-07-13 06:22:01.494443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.494561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.494586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.040 qpair failed and we were unable to recover it. 00:26:55.040 [2024-07-13 06:22:01.494725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.494841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.040 [2024-07-13 06:22:01.494872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.040 qpair failed and we were unable to recover it. 
00:26:55.040 [2024-07-13 06:22:01.494996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.040 [2024-07-13 06:22:01.495106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.040 [2024-07-13 06:22:01.495130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420
00:26:55.040 qpair failed and we were unable to recover it.
[... the same four-line failure pattern (two "posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111" messages, one "nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420", and "qpair failed and we were unable to recover it.") repeats continuously from 06:22:01.495 through 06:22:01.539 ...]
00:26:55.320 [2024-07-13 06:22:01.539787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.320 [2024-07-13 06:22:01.539908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.320 [2024-07-13 06:22:01.539936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420
00:26:55.320 qpair failed and we were unable to recover it.
00:26:55.320 [2024-07-13 06:22:01.540069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.540182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.540208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.320 qpair failed and we were unable to recover it. 00:26:55.320 [2024-07-13 06:22:01.540349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.540494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.540520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.320 qpair failed and we were unable to recover it. 00:26:55.320 [2024-07-13 06:22:01.540646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.540790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.540817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.320 qpair failed and we were unable to recover it. 00:26:55.320 [2024-07-13 06:22:01.540979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.541117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.541144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.320 qpair failed and we were unable to recover it. 00:26:55.320 [2024-07-13 06:22:01.541275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.541417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.541443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.320 qpair failed and we were unable to recover it. 00:26:55.320 [2024-07-13 06:22:01.541593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.541734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.541760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.320 qpair failed and we were unable to recover it. 00:26:55.320 [2024-07-13 06:22:01.541901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.542047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.542073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.320 qpair failed and we were unable to recover it. 
00:26:55.320 [2024-07-13 06:22:01.542196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.542313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.542341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.320 qpair failed and we were unable to recover it. 00:26:55.320 [2024-07-13 06:22:01.542492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.542637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.542663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.320 qpair failed and we were unable to recover it. 00:26:55.320 [2024-07-13 06:22:01.542775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.542903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.542930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.320 qpair failed and we were unable to recover it. 00:26:55.320 [2024-07-13 06:22:01.543050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.543177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.543203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.320 qpair failed and we were unable to recover it. 00:26:55.320 [2024-07-13 06:22:01.543347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.543463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.543489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.320 qpair failed and we were unable to recover it. 00:26:55.320 [2024-07-13 06:22:01.543658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.543767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.543793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.320 qpair failed and we were unable to recover it. 00:26:55.320 [2024-07-13 06:22:01.543967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.544084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.544111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.320 qpair failed and we were unable to recover it. 
00:26:55.320 [2024-07-13 06:22:01.544264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.544377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.544404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.320 qpair failed and we were unable to recover it. 00:26:55.320 [2024-07-13 06:22:01.544516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.544654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.544680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.320 qpair failed and we were unable to recover it. 00:26:55.320 [2024-07-13 06:22:01.544827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.544947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.544974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.320 qpair failed and we were unable to recover it. 00:26:55.320 [2024-07-13 06:22:01.545117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.545228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.545254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.320 qpair failed and we were unable to recover it. 00:26:55.320 [2024-07-13 06:22:01.545394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.545507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.545533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.320 qpair failed and we were unable to recover it. 00:26:55.320 [2024-07-13 06:22:01.545648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.545790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.545817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.320 qpair failed and we were unable to recover it. 00:26:55.320 [2024-07-13 06:22:01.545940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.546100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.546126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.320 qpair failed and we were unable to recover it. 
00:26:55.320 [2024-07-13 06:22:01.546249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.546387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.546414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.320 qpair failed and we were unable to recover it. 00:26:55.320 [2024-07-13 06:22:01.546561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.546712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.546738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.320 qpair failed and we were unable to recover it. 00:26:55.320 [2024-07-13 06:22:01.546853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.320 [2024-07-13 06:22:01.546986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.547014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.321 qpair failed and we were unable to recover it. 00:26:55.321 [2024-07-13 06:22:01.547128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.547300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.547326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.321 qpair failed and we were unable to recover it. 00:26:55.321 [2024-07-13 06:22:01.547433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.547577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.547604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.321 qpair failed and we were unable to recover it. 00:26:55.321 [2024-07-13 06:22:01.547757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.547879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.547907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.321 qpair failed and we were unable to recover it. 00:26:55.321 [2024-07-13 06:22:01.548031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.548205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.548231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.321 qpair failed and we were unable to recover it. 
00:26:55.321 [2024-07-13 06:22:01.548346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.548493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.548519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.321 qpair failed and we were unable to recover it. 00:26:55.321 [2024-07-13 06:22:01.548664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.548784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.548811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.321 qpair failed and we were unable to recover it. 00:26:55.321 [2024-07-13 06:22:01.548949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.549067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.549094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.321 qpair failed and we were unable to recover it. 00:26:55.321 [2024-07-13 06:22:01.549218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.549362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.549389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.321 qpair failed and we were unable to recover it. 00:26:55.321 [2024-07-13 06:22:01.549513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.549662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.549689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.321 qpair failed and we were unable to recover it. 00:26:55.321 [2024-07-13 06:22:01.549836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.549964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.549992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.321 qpair failed and we were unable to recover it. 00:26:55.321 [2024-07-13 06:22:01.550115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.550257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.550283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.321 qpair failed and we were unable to recover it. 
00:26:55.321 [2024-07-13 06:22:01.550429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.550551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.550577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.321 qpair failed and we were unable to recover it. 00:26:55.321 [2024-07-13 06:22:01.550691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.550828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.550854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.321 qpair failed and we were unable to recover it. 00:26:55.321 [2024-07-13 06:22:01.550977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.551108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.551134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.321 qpair failed and we were unable to recover it. 00:26:55.321 [2024-07-13 06:22:01.551277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.551415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.551441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.321 qpair failed and we were unable to recover it. 00:26:55.321 [2024-07-13 06:22:01.551585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.551702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.551728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.321 qpair failed and we were unable to recover it. 00:26:55.321 [2024-07-13 06:22:01.551879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.552003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.552030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.321 qpair failed and we were unable to recover it. 00:26:55.321 [2024-07-13 06:22:01.552172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.552348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.552373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.321 qpair failed and we were unable to recover it. 
00:26:55.321 [2024-07-13 06:22:01.552544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.552659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.552686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.321 qpair failed and we were unable to recover it. 00:26:55.321 [2024-07-13 06:22:01.552832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.552955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.552983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.321 qpair failed and we were unable to recover it. 00:26:55.321 [2024-07-13 06:22:01.553126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.553256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.553283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.321 qpair failed and we were unable to recover it. 00:26:55.321 [2024-07-13 06:22:01.553423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.553532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.553558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.321 qpair failed and we were unable to recover it. 00:26:55.321 [2024-07-13 06:22:01.553710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.553855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.553887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.321 qpair failed and we were unable to recover it. 00:26:55.321 [2024-07-13 06:22:01.554011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.554124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.554150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.321 qpair failed and we were unable to recover it. 00:26:55.321 [2024-07-13 06:22:01.554293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.554460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.554486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.321 qpair failed and we were unable to recover it. 
00:26:55.321 [2024-07-13 06:22:01.554627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.554770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.554796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.321 qpair failed and we were unable to recover it. 00:26:55.321 [2024-07-13 06:22:01.554923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.555075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.555102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.321 qpair failed and we were unable to recover it. 00:26:55.321 [2024-07-13 06:22:01.555217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.555355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.555381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.321 qpair failed and we were unable to recover it. 00:26:55.321 [2024-07-13 06:22:01.555528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.321 [2024-07-13 06:22:01.555642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.555669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.322 qpair failed and we were unable to recover it. 00:26:55.322 [2024-07-13 06:22:01.555819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.555963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.555990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.322 qpair failed and we were unable to recover it. 00:26:55.322 [2024-07-13 06:22:01.556100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.556215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.556242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.322 qpair failed and we were unable to recover it. 00:26:55.322 [2024-07-13 06:22:01.556383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.556499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.556525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.322 qpair failed and we were unable to recover it. 
00:26:55.322 [2024-07-13 06:22:01.556696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.556808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.556835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.322 qpair failed and we were unable to recover it. 00:26:55.322 [2024-07-13 06:22:01.556984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.557107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.557134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.322 qpair failed and we were unable to recover it. 00:26:55.322 [2024-07-13 06:22:01.557249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.557388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.557414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.322 qpair failed and we were unable to recover it. 00:26:55.322 [2024-07-13 06:22:01.557524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.557637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.557665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.322 qpair failed and we were unable to recover it. 00:26:55.322 [2024-07-13 06:22:01.557805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.557923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.557955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.322 qpair failed and we were unable to recover it. 00:26:55.322 [2024-07-13 06:22:01.558108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.558215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.558242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.322 qpair failed and we were unable to recover it. 00:26:55.322 [2024-07-13 06:22:01.558414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.558551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.558577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.322 qpair failed and we were unable to recover it. 
00:26:55.322 [2024-07-13 06:22:01.558720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.558826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.558852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.322 qpair failed and we were unable to recover it. 00:26:55.322 [2024-07-13 06:22:01.559005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.559162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.559188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.322 qpair failed and we were unable to recover it. 00:26:55.322 [2024-07-13 06:22:01.559298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.559425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.559451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.322 qpair failed and we were unable to recover it. 00:26:55.322 [2024-07-13 06:22:01.559565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.559729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.559755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.322 qpair failed and we were unable to recover it. 00:26:55.322 [2024-07-13 06:22:01.559882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.560001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.560027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.322 qpair failed and we were unable to recover it. 00:26:55.322 [2024-07-13 06:22:01.560139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.560283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.560310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.322 qpair failed and we were unable to recover it. 00:26:55.322 [2024-07-13 06:22:01.560454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.560582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.560608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.322 qpair failed and we were unable to recover it. 
00:26:55.322 [2024-07-13 06:22:01.560746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.560856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.560894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.322 qpair failed and we were unable to recover it. 00:26:55.322 [2024-07-13 06:22:01.561038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.561159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.561185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.322 qpair failed and we were unable to recover it. 00:26:55.322 [2024-07-13 06:22:01.561341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.561510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.561537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.322 qpair failed and we were unable to recover it. 00:26:55.322 [2024-07-13 06:22:01.561688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.561802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.561828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.322 qpair failed and we were unable to recover it. 00:26:55.322 [2024-07-13 06:22:01.561959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.562076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.562103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.322 qpair failed and we were unable to recover it. 00:26:55.322 [2024-07-13 06:22:01.562226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.562366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.562392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.322 qpair failed and we were unable to recover it. 00:26:55.322 [2024-07-13 06:22:01.562531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.562673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.562699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.322 qpair failed and we were unable to recover it. 
00:26:55.322 [2024-07-13 06:22:01.562841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.562974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.563001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.322 qpair failed and we were unable to recover it. 00:26:55.322 [2024-07-13 06:22:01.563121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.563260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.322 [2024-07-13 06:22:01.563286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.323 qpair failed and we were unable to recover it. 00:26:55.323 [2024-07-13 06:22:01.563403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.563511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.563538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.323 qpair failed and we were unable to recover it. 00:26:55.323 [2024-07-13 06:22:01.563650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.563769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.563800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.323 qpair failed and we were unable to recover it. 00:26:55.323 [2024-07-13 06:22:01.563926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.564069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.564094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.323 qpair failed and we were unable to recover it. 00:26:55.323 [2024-07-13 06:22:01.564263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.564410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.564436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.323 qpair failed and we were unable to recover it. 00:26:55.323 [2024-07-13 06:22:01.564567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.564689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.564715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.323 qpair failed and we were unable to recover it. 
00:26:55.323 [2024-07-13 06:22:01.564835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.565010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.565037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.323 qpair failed and we were unable to recover it. 00:26:55.323 [2024-07-13 06:22:01.565183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.565303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.565329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.323 qpair failed and we were unable to recover it. 00:26:55.323 [2024-07-13 06:22:01.565454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.565571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.565597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.323 qpair failed and we were unable to recover it. 00:26:55.323 [2024-07-13 06:22:01.565755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.565898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.565925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.323 qpair failed and we were unable to recover it. 00:26:55.323 [2024-07-13 06:22:01.566033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.566180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.566207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.323 qpair failed and we were unable to recover it. 00:26:55.323 [2024-07-13 06:22:01.566333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.566462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.566488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.323 qpair failed and we were unable to recover it. 00:26:55.323 [2024-07-13 06:22:01.566613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.566757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.566788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.323 qpair failed and we were unable to recover it. 
00:26:55.323 [2024-07-13 06:22:01.566933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.567087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.567113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.323 qpair failed and we were unable to recover it. 00:26:55.323 [2024-07-13 06:22:01.567255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.567376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.567404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.323 qpair failed and we were unable to recover it. 00:26:55.323 [2024-07-13 06:22:01.567551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.567677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.567703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.323 qpair failed and we were unable to recover it. 00:26:55.323 [2024-07-13 06:22:01.567811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.567930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.567956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.323 qpair failed and we were unable to recover it. 00:26:55.323 [2024-07-13 06:22:01.568102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.568239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.568265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.323 qpair failed and we were unable to recover it. 00:26:55.323 [2024-07-13 06:22:01.568391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.568508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.568534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.323 qpair failed and we were unable to recover it. 00:26:55.323 [2024-07-13 06:22:01.568684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.568826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.323 [2024-07-13 06:22:01.568852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.323 qpair failed and we were unable to recover it. 
00:26:55.323 [2024-07-13 06:22:01.569001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.323 [2024-07-13 06:22:01.569146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.323 [2024-07-13 06:22:01.569172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420
00:26:55.323 qpair failed and we were unable to recover it.
[The same three-message sequence — two "connect() failed, errno = 111" errors from posix_sock_create, one "sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420" from nvme_tcp_qpair_connect_sock, followed by "qpair failed and we were unable to recover it." — repeats for every reconnect attempt from 06:22:01.569 through 06:22:01.613.]
00:26:55.328 [2024-07-13 06:22:01.613418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.328 [2024-07-13 06:22:01.613536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.328 [2024-07-13 06:22:01.613566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420
00:26:55.328 qpair failed and we were unable to recover it.
00:26:55.328 [2024-07-13 06:22:01.613716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-13 06:22:01.613851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-13 06:22:01.613884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-13 06:22:01.614012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-13 06:22:01.614133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-13 06:22:01.614157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-13 06:22:01.614305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-13 06:22:01.614423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-13 06:22:01.614448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-13 06:22:01.614588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-13 06:22:01.614731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-13 06:22:01.614758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-13 06:22:01.614897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-13 06:22:01.615017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-13 06:22:01.615042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.328 [2024-07-13 06:22:01.615185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-13 06:22:01.615355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.328 [2024-07-13 06:22:01.615381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.328 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-13 06:22:01.615500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.615619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.615647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 
00:26:55.329 [2024-07-13 06:22:01.615764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.615883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.615910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-13 06:22:01.616019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.616165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.616192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-13 06:22:01.616347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.616461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.616487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-13 06:22:01.616617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.616732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.616759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-13 06:22:01.616906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.617059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.617085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-13 06:22:01.617198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.617318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.617344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-13 06:22:01.617482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.617591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.617616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 
00:26:55.329 [2024-07-13 06:22:01.617745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.617859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.617888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-13 06:22:01.618058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.618226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.618252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-13 06:22:01.618385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.618502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.618527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-13 06:22:01.618642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.618761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.618785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-13 06:22:01.618902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.619021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.619047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-13 06:22:01.619190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.619328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.619353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-13 06:22:01.619471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.619617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.619643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 
00:26:55.329 [2024-07-13 06:22:01.619772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.619896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.619923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-13 06:22:01.620061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.620212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.620238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-13 06:22:01.620380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.620504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.620530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-13 06:22:01.620676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.620790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.620816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-13 06:22:01.620935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.621084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.621111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-13 06:22:01.621251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.621370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.621396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-13 06:22:01.621532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.621643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.621667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 
00:26:55.329 [2024-07-13 06:22:01.621801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.621913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.621938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-13 06:22:01.622057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.622191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.622215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-13 06:22:01.622368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.622498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.622523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-13 06:22:01.622633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.622776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.622800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-13 06:22:01.622942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.623060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.623093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-13 06:22:01.623240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.623379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.623405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-13 06:22:01.623528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.623646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.623672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 
00:26:55.329 [2024-07-13 06:22:01.623815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.623963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.623989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.329 qpair failed and we were unable to recover it. 00:26:55.329 [2024-07-13 06:22:01.624098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.329 [2024-07-13 06:22:01.624219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.624245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-13 06:22:01.624360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.624471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.624496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-13 06:22:01.624635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.624743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.624768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-13 06:22:01.624883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.625033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.625058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-13 06:22:01.625177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.625297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.625322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-13 06:22:01.625434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.625547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.625571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 
00:26:55.330 [2024-07-13 06:22:01.625704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.625839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.625878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-13 06:22:01.625999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.626116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.626139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-13 06:22:01.626254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.626364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.626387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-13 06:22:01.626535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.626679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.626705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-13 06:22:01.626824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.626951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.626975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-13 06:22:01.627082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.627219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.627244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-13 06:22:01.627385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.627534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.627559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 
00:26:55.330 [2024-07-13 06:22:01.627680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.627795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.627822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-13 06:22:01.627975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.628122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.628149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-13 06:22:01.628268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.628404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.628429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-13 06:22:01.628541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.628659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.628683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-13 06:22:01.628804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.628977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.629004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-13 06:22:01.629150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.629276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.629303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-13 06:22:01.629476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.629613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.629639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 
00:26:55.330 [2024-07-13 06:22:01.629807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.629923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.629948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-13 06:22:01.630087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.630202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.630227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-13 06:22:01.630341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.630493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.630518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-13 06:22:01.630644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.630764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.630790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-13 06:22:01.630935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.631053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.631078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-13 06:22:01.631202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.631344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.631370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 00:26:55.330 [2024-07-13 06:22:01.631497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.631625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.330 [2024-07-13 06:22:01.631651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.330 qpair failed and we were unable to recover it. 
00:26:55.331 [2024-07-13 06:22:01.631785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.631911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.631938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-13 06:22:01.632067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.632217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.632243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-13 06:22:01.632362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.632490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.632515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-13 06:22:01.632628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.632749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.632774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-13 06:22:01.632890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.632998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.633024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-13 06:22:01.633165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.633311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.633336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-13 06:22:01.633458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.633567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.633592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 
00:26:55.331 [2024-07-13 06:22:01.633707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.633825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.633851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-13 06:22:01.633975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.634117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.634143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-13 06:22:01.634266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.634413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.634439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-13 06:22:01.634560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.634678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.634703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-13 06:22:01.634823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.634977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.635003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-13 06:22:01.635152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.635277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.635303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-13 06:22:01.635419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.635542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.635567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 
00:26:55.331 [2024-07-13 06:22:01.635691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.635830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.635856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-13 06:22:01.635980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.636125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.636151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-13 06:22:01.636303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.636444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.636469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-13 06:22:01.636591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.636711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.636736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-13 06:22:01.636890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.637006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.637032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-13 06:22:01.637156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.637265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.637291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-13 06:22:01.637439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.637556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.637582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 
00:26:55.331 [2024-07-13 06:22:01.637693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.637805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.637831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-13 06:22:01.637981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.638121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.638146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-13 06:22:01.638267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.638384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.638410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-13 06:22:01.638521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.638636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.638661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-13 06:22:01.638779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.638897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.638924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-13 06:22:01.639076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.639215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.639240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-13 06:22:01.639354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.639505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.639531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 
00:26:55.331 [2024-07-13 06:22:01.639681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.639788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.639814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-13 06:22:01.639954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.640073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.640099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.331 qpair failed and we were unable to recover it. 00:26:55.331 [2024-07-13 06:22:01.640222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.331 [2024-07-13 06:22:01.640363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-13 06:22:01.640389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-13 06:22:01.640510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-13 06:22:01.640632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-13 06:22:01.640658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-13 06:22:01.640764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-13 06:22:01.640910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-13 06:22:01.640937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-13 06:22:01.641085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-13 06:22:01.641194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-13 06:22:01.641219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-13 06:22:01.641337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-13 06:22:01.641464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-13 06:22:01.641489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 
00:26:55.332 [2024-07-13 06:22:01.641639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-13 06:22:01.641755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-13 06:22:01.641780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-13 06:22:01.641918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-13 06:22:01.642041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-13 06:22:01.642066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-13 06:22:01.642178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-13 06:22:01.642351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-13 06:22:01.642376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-13 06:22:01.642524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-13 06:22:01.642670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-13 06:22:01.642697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-13 06:22:01.642810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-13 06:22:01.642924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-13 06:22:01.642951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-13 06:22:01.643094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-13 06:22:01.643213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-13 06:22:01.643239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 00:26:55.332 [2024-07-13 06:22:01.643357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-13 06:22:01.643473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.332 [2024-07-13 06:22:01.643501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.332 qpair failed and we were unable to recover it. 
00:26:55.337 [2024-07-13 06:22:01.686451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-13 06:22:01.686565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-13 06:22:01.686591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-13 06:22:01.686713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-13 06:22:01.686852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-13 06:22:01.686884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-13 06:22:01.687007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-13 06:22:01.687159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-13 06:22:01.687185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-13 06:22:01.687300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-13 06:22:01.687447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-13 06:22:01.687474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-13 06:22:01.687699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-13 06:22:01.687822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-13 06:22:01.687848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-13 06:22:01.687985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-13 06:22:01.688132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-13 06:22:01.688158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-13 06:22:01.688272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-13 06:22:01.688441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-13 06:22:01.688467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 
00:26:55.337 [2024-07-13 06:22:01.688577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-13 06:22:01.688695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-13 06:22:01.688721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-13 06:22:01.688862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-13 06:22:01.689031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-13 06:22:01.689058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-13 06:22:01.689174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-13 06:22:01.689316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-13 06:22:01.689342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-13 06:22:01.689463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-13 06:22:01.689585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-13 06:22:01.689612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-13 06:22:01.689756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-13 06:22:01.689912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-13 06:22:01.689939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-13 06:22:01.690081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-13 06:22:01.690254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-13 06:22:01.690280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 00:26:55.337 [2024-07-13 06:22:01.690400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-13 06:22:01.690513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.337 [2024-07-13 06:22:01.690539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.337 qpair failed and we were unable to recover it. 
00:26:55.337 [2024-07-13 06:22:01.690661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.690774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.690802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-13 06:22:01.690936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.691047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.691073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-13 06:22:01.691214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.691334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.691360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-13 06:22:01.691503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.691644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.691670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-13 06:22:01.691806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.691925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.691952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-13 06:22:01.692071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.692203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.692229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-13 06:22:01.692373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.692543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.692569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 
00:26:55.338 [2024-07-13 06:22:01.692696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.692813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.692839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-13 06:22:01.692955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.693070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.693096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-13 06:22:01.693269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.693412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.693438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-13 06:22:01.693564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.693681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.693707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-13 06:22:01.693824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.693953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.693980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-13 06:22:01.694119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.694341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.694368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-13 06:22:01.694510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.694646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.694672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 
00:26:55.338 [2024-07-13 06:22:01.694784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.694904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.694931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-13 06:22:01.695086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.695223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.695253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-13 06:22:01.695373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.695490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.695516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-13 06:22:01.695659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.695803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.695829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-13 06:22:01.695956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.696097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.696123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-13 06:22:01.696277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.696421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.696447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-13 06:22:01.696568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.696716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.696741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 
00:26:55.338 [2024-07-13 06:22:01.696888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.696996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.697022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-13 06:22:01.697161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.697300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.697326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-13 06:22:01.697438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.697576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.697602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-13 06:22:01.697749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.697919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.697946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-13 06:22:01.698092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.698237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.698267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-13 06:22:01.698408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.698553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.698579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-13 06:22:01.698750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.698898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.698924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 
00:26:55.338 [2024-07-13 06:22:01.699098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.699253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.338 [2024-07-13 06:22:01.699279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.338 qpair failed and we were unable to recover it. 00:26:55.338 [2024-07-13 06:22:01.699403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.699556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.699583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-13 06:22:01.699695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.699841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.699873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-13 06:22:01.700017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.700130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.700156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-13 06:22:01.700275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.700391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.700417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-13 06:22:01.700542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.700715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.700741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-13 06:22:01.700852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.701003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.701030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 
00:26:55.339 [2024-07-13 06:22:01.701140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.701254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.701285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-13 06:22:01.701453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.701595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.701622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-13 06:22:01.701740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.701889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.701916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-13 06:22:01.702029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.702167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.702193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-13 06:22:01.702316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.702423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.702450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-13 06:22:01.702602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.702747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.702773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-13 06:22:01.702897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.703012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.703038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 
00:26:55.339 [2024-07-13 06:22:01.703197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.703372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.703398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-13 06:22:01.703537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.703710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.703736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-13 06:22:01.703853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.704012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.704038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-13 06:22:01.704188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.704356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.704386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-13 06:22:01.704502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.704626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.704653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-13 06:22:01.704827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.704998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.705025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-13 06:22:01.705196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.705318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.705344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 
00:26:55.339 [2024-07-13 06:22:01.705463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.705591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.705617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-13 06:22:01.705841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.706022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.706049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-13 06:22:01.706178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.706310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.706337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-13 06:22:01.706503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.706622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.706648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.339 [2024-07-13 06:22:01.706777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.706933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.339 [2024-07-13 06:22:01.706961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.339 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-13 06:22:01.707104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.707220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.707246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-13 06:22:01.707365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.707484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.707510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 
00:26:55.340 [2024-07-13 06:22:01.707649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.707768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.707794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-13 06:22:01.707907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.708027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.708054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-13 06:22:01.708169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.708321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.708348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-13 06:22:01.708474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.708591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.708617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-13 06:22:01.708736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.708849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.708882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-13 06:22:01.708994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.709133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.709159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-13 06:22:01.709278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.709421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.709447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 
00:26:55.340 [2024-07-13 06:22:01.709569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.709799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.709826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-13 06:22:01.709976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.710112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.710138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-13 06:22:01.710313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.710431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.710459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-13 06:22:01.710570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.710817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.710843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-13 06:22:01.710961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.711078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.711105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-13 06:22:01.711245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.711356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.711383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-13 06:22:01.711510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.711642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.711668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 
00:26:55.340 [2024-07-13 06:22:01.711813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.711943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.711970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-13 06:22:01.712081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.712220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.712246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-13 06:22:01.712366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.712534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.712560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-13 06:22:01.712668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.712808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.712835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-13 06:22:01.712958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.713071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.713097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-13 06:22:01.713209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.713347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.713373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-13 06:22:01.713602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.713736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.713765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 
00:26:55.340 [2024-07-13 06:22:01.713989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.714140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.714167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-13 06:22:01.714300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.714445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.714471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-13 06:22:01.714595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.714830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.714856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-13 06:22:01.714972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.715153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.715179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-13 06:22:01.715315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.715432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.340 [2024-07-13 06:22:01.715459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.340 qpair failed and we were unable to recover it. 00:26:55.340 [2024-07-13 06:22:01.715601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.341 [2024-07-13 06:22:01.715831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.341 [2024-07-13 06:22:01.715857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.341 qpair failed and we were unable to recover it. 00:26:55.341 [2024-07-13 06:22:01.715979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.341 [2024-07-13 06:22:01.716107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.341 [2024-07-13 06:22:01.716135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.341 qpair failed and we were unable to recover it. 
00:26:55.341 [2024-07-13 06:22:01.716358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.341 [2024-07-13 06:22:01.716480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.341 [2024-07-13 06:22:01.716517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420
00:26:55.341 qpair failed and we were unable to recover it.
00:26:55.341 [2024-07-13 06:22:01.716682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.341 [2024-07-13 06:22:01.716831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.341 [2024-07-13 06:22:01.716857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420
00:26:55.341 qpair failed and we were unable to recover it.
00:26:55.346 [... the same four-entry sequence (two posix_sock_create connect() failed, errno = 111 entries, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats for every subsequent connection attempt through 2024-07-13 06:22:01.762887 ...]
00:26:55.346 [2024-07-13 06:22:01.763060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.346 [2024-07-13 06:22:01.763185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.346 [2024-07-13 06:22:01.763211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.346 qpair failed and we were unable to recover it. 00:26:55.346 [2024-07-13 06:22:01.763327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.346 [2024-07-13 06:22:01.763475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.346 [2024-07-13 06:22:01.763502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.346 qpair failed and we were unable to recover it. 00:26:55.346 [2024-07-13 06:22:01.763650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.346 [2024-07-13 06:22:01.763804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.346 [2024-07-13 06:22:01.763830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.346 qpair failed and we were unable to recover it. 00:26:55.346 [2024-07-13 06:22:01.763957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.346 [2024-07-13 06:22:01.764078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.346 [2024-07-13 06:22:01.764104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.346 qpair failed and we were unable to recover it. 00:26:55.346 [2024-07-13 06:22:01.764251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.346 [2024-07-13 06:22:01.764364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.346 [2024-07-13 06:22:01.764391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.346 qpair failed and we were unable to recover it. 00:26:55.346 [2024-07-13 06:22:01.764531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.346 [2024-07-13 06:22:01.764647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.346 [2024-07-13 06:22:01.764673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.346 qpair failed and we were unable to recover it. 00:26:55.346 [2024-07-13 06:22:01.764840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.346 [2024-07-13 06:22:01.764992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.346 [2024-07-13 06:22:01.765019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.346 qpair failed and we were unable to recover it. 
00:26:55.346 [2024-07-13 06:22:01.765156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.346 [2024-07-13 06:22:01.765270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.346 [2024-07-13 06:22:01.765296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.346 qpair failed and we were unable to recover it. 00:26:55.346 [2024-07-13 06:22:01.765418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.346 [2024-07-13 06:22:01.765558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.346 [2024-07-13 06:22:01.765585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.346 qpair failed and we were unable to recover it. 00:26:55.346 [2024-07-13 06:22:01.765699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.346 [2024-07-13 06:22:01.765847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.346 [2024-07-13 06:22:01.765886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.346 qpair failed and we were unable to recover it. 00:26:55.346 [2024-07-13 06:22:01.766032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.346 [2024-07-13 06:22:01.766186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.346 [2024-07-13 06:22:01.766212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.346 qpair failed and we were unable to recover it. 00:26:55.346 [2024-07-13 06:22:01.766351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.346 [2024-07-13 06:22:01.766469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.346 [2024-07-13 06:22:01.766495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.346 qpair failed and we were unable to recover it. 00:26:55.346 [2024-07-13 06:22:01.766611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.346 [2024-07-13 06:22:01.766750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.346 [2024-07-13 06:22:01.766776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.346 qpair failed and we were unable to recover it. 00:26:55.346 [2024-07-13 06:22:01.766898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.346 [2024-07-13 06:22:01.767073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.346 [2024-07-13 06:22:01.767099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.346 qpair failed and we were unable to recover it. 
00:26:55.347 [2024-07-13 06:22:01.767215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.767339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.767366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.347 qpair failed and we were unable to recover it. 00:26:55.347 [2024-07-13 06:22:01.767482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.767622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.767649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.347 qpair failed and we were unable to recover it. 00:26:55.347 [2024-07-13 06:22:01.767791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.767927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.767954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.347 qpair failed and we were unable to recover it. 00:26:55.347 [2024-07-13 06:22:01.768112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.768336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.768363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.347 qpair failed and we were unable to recover it. 00:26:55.347 [2024-07-13 06:22:01.768514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.768628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.768653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.347 qpair failed and we were unable to recover it. 00:26:55.347 [2024-07-13 06:22:01.768820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.768968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.768995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.347 qpair failed and we were unable to recover it. 00:26:55.347 [2024-07-13 06:22:01.769142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.769258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.769285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.347 qpair failed and we were unable to recover it. 
00:26:55.347 [2024-07-13 06:22:01.769399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.769546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.769572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.347 qpair failed and we were unable to recover it. 00:26:55.347 [2024-07-13 06:22:01.769722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.769830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.769857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.347 qpair failed and we were unable to recover it. 00:26:55.347 [2024-07-13 06:22:01.770007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.770140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.770166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.347 qpair failed and we were unable to recover it. 00:26:55.347 [2024-07-13 06:22:01.770281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.770430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.770456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.347 qpair failed and we were unable to recover it. 00:26:55.347 [2024-07-13 06:22:01.770576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.770715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.770741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.347 qpair failed and we were unable to recover it. 00:26:55.347 [2024-07-13 06:22:01.770860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.771009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.771035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.347 qpair failed and we were unable to recover it. 00:26:55.347 [2024-07-13 06:22:01.771185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.771307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.771333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.347 qpair failed and we were unable to recover it. 
00:26:55.347 [2024-07-13 06:22:01.771458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.771608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.771634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.347 qpair failed and we were unable to recover it. 00:26:55.347 [2024-07-13 06:22:01.771792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.771944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.771971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.347 qpair failed and we were unable to recover it. 00:26:55.347 [2024-07-13 06:22:01.772086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.772214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.772240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.347 qpair failed and we were unable to recover it. 00:26:55.347 [2024-07-13 06:22:01.772384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.772504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.772530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.347 qpair failed and we were unable to recover it. 00:26:55.347 [2024-07-13 06:22:01.772682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.772794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.772820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.347 qpair failed and we were unable to recover it. 00:26:55.347 [2024-07-13 06:22:01.772950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.773172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.773198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.347 qpair failed and we were unable to recover it. 00:26:55.347 [2024-07-13 06:22:01.773351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.773500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.773526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.347 qpair failed and we were unable to recover it. 
00:26:55.347 [2024-07-13 06:22:01.773670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.773815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.773842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.347 qpair failed and we were unable to recover it. 00:26:55.347 [2024-07-13 06:22:01.774020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.774129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.774155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.347 qpair failed and we were unable to recover it. 00:26:55.347 [2024-07-13 06:22:01.774292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.774397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.774423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.347 qpair failed and we were unable to recover it. 00:26:55.347 [2024-07-13 06:22:01.774534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.774685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.774712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.347 qpair failed and we were unable to recover it. 00:26:55.347 [2024-07-13 06:22:01.774839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.774967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.774994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.347 qpair failed and we were unable to recover it. 00:26:55.347 [2024-07-13 06:22:01.775123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.775273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.775299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.347 qpair failed and we were unable to recover it. 00:26:55.347 [2024-07-13 06:22:01.775444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.775562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.775588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.347 qpair failed and we were unable to recover it. 
00:26:55.347 [2024-07-13 06:22:01.775737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.775870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.347 [2024-07-13 06:22:01.775898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.347 qpair failed and we were unable to recover it. 00:26:55.347 [2024-07-13 06:22:01.776070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.776188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.776214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.348 qpair failed and we were unable to recover it. 00:26:55.348 [2024-07-13 06:22:01.776367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.776489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.776515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.348 qpair failed and we were unable to recover it. 00:26:55.348 [2024-07-13 06:22:01.776626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.776747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.776773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.348 qpair failed and we were unable to recover it. 00:26:55.348 [2024-07-13 06:22:01.776894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.777015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.777042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.348 qpair failed and we were unable to recover it. 00:26:55.348 [2024-07-13 06:22:01.777155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.777292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.777318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.348 qpair failed and we were unable to recover it. 00:26:55.348 [2024-07-13 06:22:01.777493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.777717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.777743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.348 qpair failed and we were unable to recover it. 
00:26:55.348 [2024-07-13 06:22:01.777918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.778032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.778058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.348 qpair failed and we were unable to recover it. 00:26:55.348 [2024-07-13 06:22:01.778182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.778323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.778349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.348 qpair failed and we were unable to recover it. 00:26:55.348 [2024-07-13 06:22:01.778462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.778682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.778708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.348 qpair failed and we were unable to recover it. 00:26:55.348 [2024-07-13 06:22:01.778879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.779024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.779052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.348 qpair failed and we were unable to recover it. 00:26:55.348 [2024-07-13 06:22:01.779169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.779316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.779342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.348 qpair failed and we were unable to recover it. 00:26:55.348 [2024-07-13 06:22:01.779450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.779569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.779595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.348 qpair failed and we were unable to recover it. 00:26:55.348 [2024-07-13 06:22:01.779700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.779877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.779904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.348 qpair failed and we were unable to recover it. 
00:26:55.348 [2024-07-13 06:22:01.780080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.780207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.780234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.348 qpair failed and we were unable to recover it. 00:26:55.348 [2024-07-13 06:22:01.780374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.780515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.780541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.348 qpair failed and we were unable to recover it. 00:26:55.348 [2024-07-13 06:22:01.780688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.780795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.780821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.348 qpair failed and we were unable to recover it. 00:26:55.348 [2024-07-13 06:22:01.780978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.781128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.781154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.348 qpair failed and we were unable to recover it. 00:26:55.348 [2024-07-13 06:22:01.781300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.781410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.781436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.348 qpair failed and we were unable to recover it. 00:26:55.348 [2024-07-13 06:22:01.781559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.781725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.781751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.348 qpair failed and we were unable to recover it. 00:26:55.348 [2024-07-13 06:22:01.781903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.782015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.782041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.348 qpair failed and we were unable to recover it. 
00:26:55.348 [2024-07-13 06:22:01.782152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.782302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.782328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.348 qpair failed and we were unable to recover it. 00:26:55.348 [2024-07-13 06:22:01.782476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.782615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.782640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.348 qpair failed and we were unable to recover it. 00:26:55.348 [2024-07-13 06:22:01.782772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.782898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.782924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.348 qpair failed and we were unable to recover it. 00:26:55.348 [2024-07-13 06:22:01.783031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.783161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.348 [2024-07-13 06:22:01.783188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.349 qpair failed and we were unable to recover it. 00:26:55.349 [2024-07-13 06:22:01.783299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.783446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.783473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.349 qpair failed and we were unable to recover it. 00:26:55.349 [2024-07-13 06:22:01.783593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.783761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.783788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.349 qpair failed and we were unable to recover it. 00:26:55.349 [2024-07-13 06:22:01.783915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.784037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.784063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.349 qpair failed and we were unable to recover it. 
00:26:55.349 [2024-07-13 06:22:01.784207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.784329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.784355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.349 qpair failed and we were unable to recover it. 00:26:55.349 [2024-07-13 06:22:01.784500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.784653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.784679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.349 qpair failed and we were unable to recover it. 00:26:55.349 [2024-07-13 06:22:01.784792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.784914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.784940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.349 qpair failed and we were unable to recover it. 00:26:55.349 [2024-07-13 06:22:01.785062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.785208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.785234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.349 qpair failed and we were unable to recover it. 00:26:55.349 [2024-07-13 06:22:01.785345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.785494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.785520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.349 qpair failed and we were unable to recover it. 00:26:55.349 [2024-07-13 06:22:01.785647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.785808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.785835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.349 qpair failed and we were unable to recover it. 00:26:55.349 [2024-07-13 06:22:01.786008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.786123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.786149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.349 qpair failed and we were unable to recover it. 
00:26:55.349 [2024-07-13 06:22:01.786279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.786428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.786454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.349 qpair failed and we were unable to recover it. 00:26:55.349 [2024-07-13 06:22:01.786574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.786724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.786750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.349 qpair failed and we were unable to recover it. 00:26:55.349 [2024-07-13 06:22:01.786911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.787056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.787083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.349 qpair failed and we were unable to recover it. 00:26:55.349 [2024-07-13 06:22:01.787196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.787338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.787364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.349 qpair failed and we were unable to recover it. 00:26:55.349 [2024-07-13 06:22:01.787507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.787656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.787686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.349 qpair failed and we were unable to recover it. 00:26:55.349 [2024-07-13 06:22:01.787803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.787921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.787948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.349 qpair failed and we were unable to recover it. 00:26:55.349 [2024-07-13 06:22:01.788074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.788182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.788208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.349 qpair failed and we were unable to recover it. 
00:26:55.349 [2024-07-13 06:22:01.788353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.788467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.788493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.349 qpair failed and we were unable to recover it. 00:26:55.349 [2024-07-13 06:22:01.788647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.788771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.788797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.349 qpair failed and we were unable to recover it. 00:26:55.349 [2024-07-13 06:22:01.788941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.789071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.789097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.349 qpair failed and we were unable to recover it. 00:26:55.349 [2024-07-13 06:22:01.789221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.789341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.789367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.349 qpair failed and we were unable to recover it. 00:26:55.349 [2024-07-13 06:22:01.789516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.789640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.789666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.349 qpair failed and we were unable to recover it. 00:26:55.349 [2024-07-13 06:22:01.789813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.789931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.789963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.349 qpair failed and we were unable to recover it. 00:26:55.349 [2024-07-13 06:22:01.790085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.790198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.790224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.349 qpair failed and we were unable to recover it. 
00:26:55.349 [2024-07-13 06:22:01.790337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.790450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.790477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.349 qpair failed and we were unable to recover it. 00:26:55.349 [2024-07-13 06:22:01.790596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.790737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.790763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.349 qpair failed and we were unable to recover it. 00:26:55.349 [2024-07-13 06:22:01.790879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.791000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.791027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.349 qpair failed and we were unable to recover it. 00:26:55.349 [2024-07-13 06:22:01.791173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.791342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.791368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.349 qpair failed and we were unable to recover it. 00:26:55.349 [2024-07-13 06:22:01.791591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.791734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.349 [2024-07-13 06:22:01.791761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.350 qpair failed and we were unable to recover it. 00:26:55.350 [2024-07-13 06:22:01.791913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.792064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.792091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.350 qpair failed and we were unable to recover it. 00:26:55.350 [2024-07-13 06:22:01.792244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.792386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.792412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.350 qpair failed and we were unable to recover it. 
00:26:55.350 [2024-07-13 06:22:01.792535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.792674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.792701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.350 qpair failed and we were unable to recover it. 00:26:55.350 [2024-07-13 06:22:01.792818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.792937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.792969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.350 qpair failed and we were unable to recover it. 00:26:55.350 [2024-07-13 06:22:01.793113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.793227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.793253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.350 qpair failed and we were unable to recover it. 00:26:55.350 [2024-07-13 06:22:01.793398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.793546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.793573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.350 qpair failed and we were unable to recover it. 00:26:55.350 [2024-07-13 06:22:01.793703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.793858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.793898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.350 qpair failed and we were unable to recover it. 00:26:55.350 [2024-07-13 06:22:01.794024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.794171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.794197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.350 qpair failed and we were unable to recover it. 00:26:55.350 [2024-07-13 06:22:01.794313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.794491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.794517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.350 qpair failed and we were unable to recover it. 
00:26:55.350 [2024-07-13 06:22:01.794660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.794765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.794791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.350 qpair failed and we were unable to recover it. 00:26:55.350 [2024-07-13 06:22:01.794933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.795065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.795091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.350 qpair failed and we were unable to recover it. 00:26:55.350 [2024-07-13 06:22:01.795206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.795315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.795341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.350 qpair failed and we were unable to recover it. 00:26:55.350 [2024-07-13 06:22:01.795460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.795600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.795626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.350 qpair failed and we were unable to recover it. 00:26:55.350 [2024-07-13 06:22:01.795770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.795911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.795942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.350 qpair failed and we were unable to recover it. 00:26:55.350 [2024-07-13 06:22:01.796061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.796199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.796225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.350 qpair failed and we were unable to recover it. 00:26:55.350 [2024-07-13 06:22:01.796347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.796496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.796522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.350 qpair failed and we were unable to recover it. 
00:26:55.350 [2024-07-13 06:22:01.796668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.796791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.796818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.350 qpair failed and we were unable to recover it. 00:26:55.350 [2024-07-13 06:22:01.796933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.797058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.797085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.350 qpair failed and we were unable to recover it. 00:26:55.350 [2024-07-13 06:22:01.797227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.797397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.797423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.350 qpair failed and we were unable to recover it. 00:26:55.350 [2024-07-13 06:22:01.797546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.797664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.797692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.350 qpair failed and we were unable to recover it. 00:26:55.350 [2024-07-13 06:22:01.797834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.797973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.798001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.350 qpair failed and we were unable to recover it. 00:26:55.350 [2024-07-13 06:22:01.798116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.798234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.798260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.350 qpair failed and we were unable to recover it. 00:26:55.350 [2024-07-13 06:22:01.798403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.798532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.798558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.350 qpair failed and we were unable to recover it. 
00:26:55.350 [2024-07-13 06:22:01.798700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.798840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.798878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.350 qpair failed and we were unable to recover it. 00:26:55.350 [2024-07-13 06:22:01.798997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.799148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.799174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.350 qpair failed and we were unable to recover it. 00:26:55.350 [2024-07-13 06:22:01.799306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.799454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.799481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.350 qpair failed and we were unable to recover it. 00:26:55.350 [2024-07-13 06:22:01.799627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.799760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.799785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.350 qpair failed and we were unable to recover it. 00:26:55.350 [2024-07-13 06:22:01.799903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.800020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.800046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.350 qpair failed and we were unable to recover it. 00:26:55.350 [2024-07-13 06:22:01.800169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.350 [2024-07-13 06:22:01.800307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.800332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.351 qpair failed and we were unable to recover it. 00:26:55.351 [2024-07-13 06:22:01.800481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.800620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.800646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.351 qpair failed and we were unable to recover it. 
00:26:55.351 [2024-07-13 06:22:01.800771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.800918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.800945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.351 qpair failed and we were unable to recover it. 00:26:55.351 [2024-07-13 06:22:01.801106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.801213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.801239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.351 qpair failed and we were unable to recover it. 00:26:55.351 [2024-07-13 06:22:01.801360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.801508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.801535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.351 qpair failed and we were unable to recover it. 00:26:55.351 [2024-07-13 06:22:01.801665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.801788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.801814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.351 qpair failed and we were unable to recover it. 00:26:55.351 [2024-07-13 06:22:01.801965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.802084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.802110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.351 qpair failed and we were unable to recover it. 00:26:55.351 [2024-07-13 06:22:01.802251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.802389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.802415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.351 qpair failed and we were unable to recover it. 00:26:55.351 [2024-07-13 06:22:01.802585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.802725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.802752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.351 qpair failed and we were unable to recover it. 
00:26:55.351 [2024-07-13 06:22:01.802868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.802991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.803017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.351 qpair failed and we were unable to recover it. 00:26:55.351 [2024-07-13 06:22:01.803161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.803297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.803323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.351 qpair failed and we were unable to recover it. 00:26:55.351 [2024-07-13 06:22:01.803464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.803590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.803617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.351 qpair failed and we were unable to recover it. 00:26:55.351 [2024-07-13 06:22:01.803740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.803884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.803910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.351 qpair failed and we were unable to recover it. 00:26:55.351 [2024-07-13 06:22:01.804028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.804173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.804199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.351 qpair failed and we were unable to recover it. 00:26:55.351 [2024-07-13 06:22:01.804351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.804520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.804546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.351 qpair failed and we were unable to recover it. 00:26:55.351 [2024-07-13 06:22:01.804705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.804830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.804856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.351 qpair failed and we were unable to recover it. 
00:26:55.351 [2024-07-13 06:22:01.805041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.805169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.805195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.351 qpair failed and we were unable to recover it. 00:26:55.351 [2024-07-13 06:22:01.805308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.805433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.805461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.351 qpair failed and we were unable to recover it. 00:26:55.351 [2024-07-13 06:22:01.805601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.805715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.805741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.351 qpair failed and we were unable to recover it. 00:26:55.351 [2024-07-13 06:22:01.805846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.806027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.806055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.351 qpair failed and we were unable to recover it. 00:26:55.351 [2024-07-13 06:22:01.806180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.806332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.806359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.351 qpair failed and we were unable to recover it. 00:26:55.351 [2024-07-13 06:22:01.806481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.806604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.806630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.351 qpair failed and we were unable to recover it. 00:26:55.351 [2024-07-13 06:22:01.806778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.806925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.806953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.351 qpair failed and we were unable to recover it. 
00:26:55.351 [2024-07-13 06:22:01.807079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.807224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.807251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.351 qpair failed and we were unable to recover it. 00:26:55.351 [2024-07-13 06:22:01.807395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.807538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.807564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.351 qpair failed and we were unable to recover it. 00:26:55.351 [2024-07-13 06:22:01.807718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.807831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.807876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.351 qpair failed and we were unable to recover it. 00:26:55.351 [2024-07-13 06:22:01.808059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.808211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.808239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.351 qpair failed and we were unable to recover it. 00:26:55.351 [2024-07-13 06:22:01.808400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.808554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.808581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.351 qpair failed and we were unable to recover it. 00:26:55.351 [2024-07-13 06:22:01.808694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.808805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.351 [2024-07-13 06:22:01.808832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.351 qpair failed and we were unable to recover it. 00:26:55.351 [2024-07-13 06:22:01.809011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.352 [2024-07-13 06:22:01.809136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.352 [2024-07-13 06:22:01.809162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.352 qpair failed and we were unable to recover it. 
00:26:55.352 [2024-07-13 06:22:01.809281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.352 [2024-07-13 06:22:01.809402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.352 [2024-07-13 06:22:01.809430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.352 qpair failed and we were unable to recover it. 00:26:55.352 [2024-07-13 06:22:01.809543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.352 [2024-07-13 06:22:01.809659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.352 [2024-07-13 06:22:01.809697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.352 qpair failed and we were unable to recover it. 00:26:55.352 [2024-07-13 06:22:01.809863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.809997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.810024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.629 qpair failed and we were unable to recover it. 00:26:55.629 [2024-07-13 06:22:01.810136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.810273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.810300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.629 qpair failed and we were unable to recover it. 00:26:55.629 [2024-07-13 06:22:01.810407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.810523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.810550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.629 qpair failed and we were unable to recover it. 00:26:55.629 [2024-07-13 06:22:01.810678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.810828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.810855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.629 qpair failed and we were unable to recover it. 00:26:55.629 [2024-07-13 06:22:01.810996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.811122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.811149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.629 qpair failed and we were unable to recover it. 
00:26:55.629 [2024-07-13 06:22:01.811323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.811441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.811469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.629 qpair failed and we were unable to recover it. 00:26:55.629 [2024-07-13 06:22:01.811618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.811745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.811771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.629 qpair failed and we were unable to recover it. 00:26:55.629 [2024-07-13 06:22:01.811892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.811998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.812025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.629 qpair failed and we were unable to recover it. 00:26:55.629 [2024-07-13 06:22:01.812177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.812302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.812329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.629 qpair failed and we were unable to recover it. 00:26:55.629 [2024-07-13 06:22:01.812487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.812641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.812667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.629 qpair failed and we were unable to recover it. 00:26:55.629 [2024-07-13 06:22:01.812812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.812960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.812987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.629 qpair failed and we were unable to recover it. 00:26:55.629 [2024-07-13 06:22:01.813100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.813243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.813270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.629 qpair failed and we were unable to recover it. 
00:26:55.629 [2024-07-13 06:22:01.813444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.813617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.813643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.629 qpair failed and we were unable to recover it. 00:26:55.629 [2024-07-13 06:22:01.813788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.813908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.813935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.629 qpair failed and we were unable to recover it. 00:26:55.629 [2024-07-13 06:22:01.814090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.814211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.814237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.629 qpair failed and we were unable to recover it. 00:26:55.629 [2024-07-13 06:22:01.814363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.814539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.814565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.629 qpair failed and we were unable to recover it. 00:26:55.629 [2024-07-13 06:22:01.814737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.814857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.814889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.629 qpair failed and we were unable to recover it. 00:26:55.629 [2024-07-13 06:22:01.815000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.815127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.815153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.629 qpair failed and we were unable to recover it. 00:26:55.629 [2024-07-13 06:22:01.815267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.815392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.815419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.629 qpair failed and we were unable to recover it. 
00:26:55.629 [2024-07-13 06:22:01.815564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.815710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.815736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.629 qpair failed and we were unable to recover it. 00:26:55.629 [2024-07-13 06:22:01.815850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.629 [2024-07-13 06:22:01.815965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.815992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.630 qpair failed and we were unable to recover it. 00:26:55.630 [2024-07-13 06:22:01.816120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.816264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.816291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.630 qpair failed and we were unable to recover it. 00:26:55.630 [2024-07-13 06:22:01.816424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.816567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.816593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.630 qpair failed and we were unable to recover it. 00:26:55.630 [2024-07-13 06:22:01.816721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.816831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.816857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.630 qpair failed and we were unable to recover it. 00:26:55.630 [2024-07-13 06:22:01.817028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.817151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.817178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.630 qpair failed and we were unable to recover it. 00:26:55.630 [2024-07-13 06:22:01.817322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.817446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.817472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.630 qpair failed and we were unable to recover it. 
00:26:55.630 [2024-07-13 06:22:01.817590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.817729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.817756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.630 qpair failed and we were unable to recover it. 00:26:55.630 [2024-07-13 06:22:01.817896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.818013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.818039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.630 qpair failed and we were unable to recover it. 00:26:55.630 [2024-07-13 06:22:01.818197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.818317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.818344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.630 qpair failed and we were unable to recover it. 00:26:55.630 [2024-07-13 06:22:01.818454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.818599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.818625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.630 qpair failed and we were unable to recover it. 00:26:55.630 [2024-07-13 06:22:01.818738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.818911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.818938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.630 qpair failed and we were unable to recover it. 00:26:55.630 [2024-07-13 06:22:01.819052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.819189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.819215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.630 qpair failed and we were unable to recover it. 00:26:55.630 [2024-07-13 06:22:01.819390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.819497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.819523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.630 qpair failed and we were unable to recover it. 
00:26:55.630 [2024-07-13 06:22:01.819661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.819830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.819856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.630 qpair failed and we were unable to recover it. 00:26:55.630 [2024-07-13 06:22:01.819983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.820134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.820161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.630 qpair failed and we were unable to recover it. 00:26:55.630 [2024-07-13 06:22:01.820276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.820387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.820414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.630 qpair failed and we were unable to recover it. 00:26:55.630 [2024-07-13 06:22:01.820528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.820681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.820708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.630 qpair failed and we were unable to recover it. 00:26:55.630 [2024-07-13 06:22:01.820819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.820941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.820968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.630 qpair failed and we were unable to recover it. 00:26:55.630 [2024-07-13 06:22:01.821092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.821235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.821262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.630 qpair failed and we were unable to recover it. 00:26:55.630 [2024-07-13 06:22:01.821409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.821522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.821549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.630 qpair failed and we were unable to recover it. 
00:26:55.630 [2024-07-13 06:22:01.821662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.821781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.821809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.630 qpair failed and we were unable to recover it. 00:26:55.630 [2024-07-13 06:22:01.821956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.822076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.822103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.630 qpair failed and we were unable to recover it. 00:26:55.630 [2024-07-13 06:22:01.822252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.822425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.822451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.630 qpair failed and we were unable to recover it. 00:26:55.630 [2024-07-13 06:22:01.822561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.822707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.822734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.630 qpair failed and we were unable to recover it. 00:26:55.630 [2024-07-13 06:22:01.822879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.823025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.823051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.630 qpair failed and we were unable to recover it. 00:26:55.630 [2024-07-13 06:22:01.823172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.823345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.823371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.630 qpair failed and we were unable to recover it. 00:26:55.630 [2024-07-13 06:22:01.823484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.823623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.823650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.630 qpair failed and we were unable to recover it. 
00:26:55.630 [2024-07-13 06:22:01.823789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.823909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.823936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.630 qpair failed and we were unable to recover it. 00:26:55.630 [2024-07-13 06:22:01.824051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.824172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.630 [2024-07-13 06:22:01.824198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.630 qpair failed and we were unable to recover it. 00:26:55.631 [2024-07-13 06:22:01.824344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.824458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.824484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.631 qpair failed and we were unable to recover it. 00:26:55.631 [2024-07-13 06:22:01.824658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.824769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.824795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.631 qpair failed and we were unable to recover it. 00:26:55.631 [2024-07-13 06:22:01.824937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.825077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.825103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.631 qpair failed and we were unable to recover it. 00:26:55.631 [2024-07-13 06:22:01.825229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.825347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.825374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.631 qpair failed and we were unable to recover it. 00:26:55.631 [2024-07-13 06:22:01.825494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.825640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.825667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.631 qpair failed and we were unable to recover it. 
00:26:55.631 [2024-07-13 06:22:01.825785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.825970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.825997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.631 qpair failed and we were unable to recover it. 00:26:55.631 [2024-07-13 06:22:01.826119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.826235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.826261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.631 qpair failed and we were unable to recover it. 00:26:55.631 [2024-07-13 06:22:01.826370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.826480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.826507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.631 qpair failed and we were unable to recover it. 00:26:55.631 [2024-07-13 06:22:01.826630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.826755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.826783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.631 qpair failed and we were unable to recover it. 00:26:55.631 [2024-07-13 06:22:01.826900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.827066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.827092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.631 qpair failed and we were unable to recover it. 00:26:55.631 [2024-07-13 06:22:01.827216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.827360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.827387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.631 qpair failed and we were unable to recover it. 00:26:55.631 [2024-07-13 06:22:01.827529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.827642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.827669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.631 qpair failed and we were unable to recover it. 
00:26:55.631 [2024-07-13 06:22:01.827810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.827948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.827976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.631 qpair failed and we were unable to recover it. 00:26:55.631 [2024-07-13 06:22:01.828119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.828290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.828317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.631 qpair failed and we were unable to recover it. 00:26:55.631 [2024-07-13 06:22:01.828457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.828594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.828621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.631 qpair failed and we were unable to recover it. 00:26:55.631 [2024-07-13 06:22:01.828730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.828884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.828911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.631 qpair failed and we were unable to recover it. 00:26:55.631 [2024-07-13 06:22:01.829030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.829200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.829226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.631 qpair failed and we were unable to recover it. 00:26:55.631 [2024-07-13 06:22:01.829353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.829470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.829497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.631 qpair failed and we were unable to recover it. 00:26:55.631 [2024-07-13 06:22:01.829617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.829763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.829789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.631 qpair failed and we were unable to recover it. 
00:26:55.631 [2024-07-13 06:22:01.829931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.830076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.830102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.631 qpair failed and we were unable to recover it. 00:26:55.631 [2024-07-13 06:22:01.830247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.830364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.830391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.631 qpair failed and we were unable to recover it. 00:26:55.631 [2024-07-13 06:22:01.830507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.830631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.830658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.631 qpair failed and we were unable to recover it. 00:26:55.631 [2024-07-13 06:22:01.830807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.830956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.830982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.631 qpair failed and we were unable to recover it. 00:26:55.631 [2024-07-13 06:22:01.831122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.831234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.831261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.631 qpair failed and we were unable to recover it. 00:26:55.631 [2024-07-13 06:22:01.831395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.831536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.831564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.631 qpair failed and we were unable to recover it. 00:26:55.631 [2024-07-13 06:22:01.831679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.831823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.831850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.631 qpair failed and we were unable to recover it. 
00:26:55.631 [2024-07-13 06:22:01.832007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.832125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.832151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.631 qpair failed and we were unable to recover it. 00:26:55.631 [2024-07-13 06:22:01.832297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.832440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.832466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.631 qpair failed and we were unable to recover it. 00:26:55.631 [2024-07-13 06:22:01.832602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.631 [2024-07-13 06:22:01.832716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.832743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.632 qpair failed and we were unable to recover it. 00:26:55.632 [2024-07-13 06:22:01.832882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.833026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.833052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.632 qpair failed and we were unable to recover it. 00:26:55.632 [2024-07-13 06:22:01.833206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.833323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.833349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.632 qpair failed and we were unable to recover it. 00:26:55.632 [2024-07-13 06:22:01.833494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.833665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.833691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.632 qpair failed and we were unable to recover it. 00:26:55.632 [2024-07-13 06:22:01.833812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.833961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.833988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.632 qpair failed and we were unable to recover it. 
00:26:55.632 [2024-07-13 06:22:01.834112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.834258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.834285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.632 qpair failed and we were unable to recover it. 00:26:55.632 [2024-07-13 06:22:01.834412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.834564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.834591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.632 qpair failed and we were unable to recover it. 00:26:55.632 [2024-07-13 06:22:01.834704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.834859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.834891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.632 qpair failed and we were unable to recover it. 00:26:55.632 [2024-07-13 06:22:01.835033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.835157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.835183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.632 qpair failed and we were unable to recover it. 00:26:55.632 [2024-07-13 06:22:01.835328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.835440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.835466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.632 qpair failed and we were unable to recover it. 00:26:55.632 [2024-07-13 06:22:01.835575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.835693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.835719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.632 qpair failed and we were unable to recover it. 00:26:55.632 [2024-07-13 06:22:01.835828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.835983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.836010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.632 qpair failed and we were unable to recover it. 
00:26:55.632 [2024-07-13 06:22:01.836128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.836278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.836305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.632 qpair failed and we were unable to recover it. 00:26:55.632 [2024-07-13 06:22:01.836423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.836537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.836563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.632 qpair failed and we were unable to recover it. 00:26:55.632 [2024-07-13 06:22:01.836678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.836830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.836857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.632 qpair failed and we were unable to recover it. 00:26:55.632 [2024-07-13 06:22:01.837007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.837147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.837173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.632 qpair failed and we were unable to recover it. 00:26:55.632 [2024-07-13 06:22:01.837312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.837427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.837453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.632 qpair failed and we were unable to recover it. 00:26:55.632 [2024-07-13 06:22:01.837593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.837735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.837765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.632 qpair failed and we were unable to recover it. 00:26:55.632 [2024-07-13 06:22:01.837896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.838046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.838073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.632 qpair failed and we were unable to recover it. 
00:26:55.632 [2024-07-13 06:22:01.838182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.838354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.838380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.632 qpair failed and we were unable to recover it. 00:26:55.632 [2024-07-13 06:22:01.838519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.838645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.838672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.632 qpair failed and we were unable to recover it. 00:26:55.632 [2024-07-13 06:22:01.838818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.838959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.838986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.632 qpair failed and we were unable to recover it. 00:26:55.632 [2024-07-13 06:22:01.839106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.839256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.839283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.632 qpair failed and we were unable to recover it. 00:26:55.632 [2024-07-13 06:22:01.839411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.839549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.839575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.632 qpair failed and we were unable to recover it. 00:26:55.632 [2024-07-13 06:22:01.839717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.839825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.839851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.632 qpair failed and we were unable to recover it. 00:26:55.632 [2024-07-13 06:22:01.839972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.840085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.840112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.632 qpair failed and we were unable to recover it. 
00:26:55.632 [2024-07-13 06:22:01.840283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.840391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.840417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.632 qpair failed and we were unable to recover it. 00:26:55.632 [2024-07-13 06:22:01.840531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.840641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.840672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.632 qpair failed and we were unable to recover it. 00:26:55.632 [2024-07-13 06:22:01.840818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.840944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.840971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.632 qpair failed and we were unable to recover it. 00:26:55.632 [2024-07-13 06:22:01.841088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.841245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.632 [2024-07-13 06:22:01.841271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.633 qpair failed and we were unable to recover it. 00:26:55.633 [2024-07-13 06:22:01.841394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.841564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.841590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.633 qpair failed and we were unable to recover it. 00:26:55.633 [2024-07-13 06:22:01.841730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.841841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.841873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.633 qpair failed and we were unable to recover it. 00:26:55.633 [2024-07-13 06:22:01.841982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.842095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.842123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.633 qpair failed and we were unable to recover it. 
00:26:55.633 [2024-07-13 06:22:01.842266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.842406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.842433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.633 qpair failed and we were unable to recover it. 00:26:55.633 [2024-07-13 06:22:01.842545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.842687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.842714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.633 qpair failed and we were unable to recover it. 00:26:55.633 [2024-07-13 06:22:01.842821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.842968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.842995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.633 qpair failed and we were unable to recover it. 00:26:55.633 [2024-07-13 06:22:01.843113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.843235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.843261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.633 qpair failed and we were unable to recover it. 00:26:55.633 [2024-07-13 06:22:01.843376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.843527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.843558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.633 qpair failed and we were unable to recover it. 00:26:55.633 [2024-07-13 06:22:01.843702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.843825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.843851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.633 qpair failed and we were unable to recover it. 00:26:55.633 [2024-07-13 06:22:01.844036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.844162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.844190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.633 qpair failed and we were unable to recover it. 
00:26:55.633 [2024-07-13 06:22:01.844323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.844466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.844492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.633 qpair failed and we were unable to recover it. 00:26:55.633 [2024-07-13 06:22:01.844634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.844745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.844771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.633 qpair failed and we were unable to recover it. 00:26:55.633 [2024-07-13 06:22:01.844920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.845067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.845094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.633 qpair failed and we were unable to recover it. 00:26:55.633 [2024-07-13 06:22:01.845215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.845381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.845408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.633 qpair failed and we were unable to recover it. 00:26:55.633 [2024-07-13 06:22:01.845534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.845683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.845710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.633 qpair failed and we were unable to recover it. 00:26:55.633 [2024-07-13 06:22:01.845827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.845967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.845994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.633 qpair failed and we were unable to recover it. 00:26:55.633 [2024-07-13 06:22:01.846107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.846218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.846245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.633 qpair failed and we were unable to recover it. 
00:26:55.633 [2024-07-13 06:22:01.846390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.846546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.846576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.633 qpair failed and we were unable to recover it. 00:26:55.633 [2024-07-13 06:22:01.846702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.846844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.846875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.633 qpair failed and we were unable to recover it. 00:26:55.633 [2024-07-13 06:22:01.846986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.847134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.847160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.633 qpair failed and we were unable to recover it. 00:26:55.633 [2024-07-13 06:22:01.847278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.847420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.847447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.633 qpair failed and we were unable to recover it. 00:26:55.633 [2024-07-13 06:22:01.847587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.847708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.847734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.633 qpair failed and we were unable to recover it. 00:26:55.633 [2024-07-13 06:22:01.847844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.633 [2024-07-13 06:22:01.847959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.847986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.634 qpair failed and we were unable to recover it. 00:26:55.634 [2024-07-13 06:22:01.848095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.848243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.848269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.634 qpair failed and we were unable to recover it. 
00:26:55.634 [2024-07-13 06:22:01.848443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.848568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.848594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.634 qpair failed and we were unable to recover it. 00:26:55.634 [2024-07-13 06:22:01.848740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.848909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.848936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.634 qpair failed and we were unable to recover it. 00:26:55.634 [2024-07-13 06:22:01.849049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.849170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.849196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.634 qpair failed and we were unable to recover it. 00:26:55.634 [2024-07-13 06:22:01.849303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.849450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.849476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.634 qpair failed and we were unable to recover it. 00:26:55.634 [2024-07-13 06:22:01.849619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.849747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.849774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.634 qpair failed and we were unable to recover it. 00:26:55.634 [2024-07-13 06:22:01.849937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.850052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.850078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.634 qpair failed and we were unable to recover it. 00:26:55.634 [2024-07-13 06:22:01.850192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.850335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.850361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.634 qpair failed and we were unable to recover it. 
00:26:55.634 [2024-07-13 06:22:01.850510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.850637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.850663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.634 qpair failed and we were unable to recover it. 00:26:55.634 [2024-07-13 06:22:01.850803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.850928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.850954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.634 qpair failed and we were unable to recover it. 00:26:55.634 [2024-07-13 06:22:01.851105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.851218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.851244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.634 qpair failed and we were unable to recover it. 00:26:55.634 [2024-07-13 06:22:01.851365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.851520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.851546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.634 qpair failed and we were unable to recover it. 00:26:55.634 [2024-07-13 06:22:01.851717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.851837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.851863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.634 qpair failed and we were unable to recover it. 00:26:55.634 [2024-07-13 06:22:01.852025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.852169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.852195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.634 qpair failed and we were unable to recover it. 00:26:55.634 [2024-07-13 06:22:01.852308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.852415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.852441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.634 qpair failed and we were unable to recover it. 
00:26:55.634 [2024-07-13 06:22:01.852595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.852751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.852777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.634 qpair failed and we were unable to recover it. 00:26:55.634 [2024-07-13 06:22:01.852897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.853017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.853044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.634 qpair failed and we were unable to recover it. 00:26:55.634 [2024-07-13 06:22:01.853161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.853272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.853300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.634 qpair failed and we were unable to recover it. 00:26:55.634 [2024-07-13 06:22:01.853454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.853574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.853600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.634 qpair failed and we were unable to recover it. 00:26:55.634 [2024-07-13 06:22:01.853723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.853883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.853910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.634 qpair failed and we were unable to recover it. 00:26:55.634 [2024-07-13 06:22:01.854034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.854145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.854171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.634 qpair failed and we were unable to recover it. 00:26:55.634 [2024-07-13 06:22:01.854285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.854427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.854454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.634 qpair failed and we were unable to recover it. 
00:26:55.634 [2024-07-13 06:22:01.854565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.854716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.854743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.634 qpair failed and we were unable to recover it. 00:26:55.634 [2024-07-13 06:22:01.854860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.854976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.855002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.634 qpair failed and we were unable to recover it. 00:26:55.634 [2024-07-13 06:22:01.855141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.855272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.855299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.634 qpair failed and we were unable to recover it. 00:26:55.634 [2024-07-13 06:22:01.855411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.855590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.855616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.634 qpair failed and we were unable to recover it. 00:26:55.634 [2024-07-13 06:22:01.855729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.855879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.855905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.634 qpair failed and we were unable to recover it. 00:26:55.634 [2024-07-13 06:22:01.856049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.856160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.856186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.634 qpair failed and we were unable to recover it. 00:26:55.634 [2024-07-13 06:22:01.856359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.634 [2024-07-13 06:22:01.856486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.856512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.635 qpair failed and we were unable to recover it. 
00:26:55.635 [2024-07-13 06:22:01.856634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.856806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.856832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.635 qpair failed and we were unable to recover it. 00:26:55.635 [2024-07-13 06:22:01.856965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.857132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.857158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.635 qpair failed and we were unable to recover it. 00:26:55.635 [2024-07-13 06:22:01.857271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.857395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.857423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.635 qpair failed and we were unable to recover it. 00:26:55.635 [2024-07-13 06:22:01.857532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.857644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.857671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.635 qpair failed and we were unable to recover it. 00:26:55.635 [2024-07-13 06:22:01.857814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.857928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.857955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.635 qpair failed and we were unable to recover it. 00:26:55.635 [2024-07-13 06:22:01.858101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.858216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.858242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.635 qpair failed and we were unable to recover it. 00:26:55.635 [2024-07-13 06:22:01.858362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.858502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.858528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.635 qpair failed and we were unable to recover it. 
00:26:55.635 [2024-07-13 06:22:01.858673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.858812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.858838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.635 qpair failed and we were unable to recover it. 00:26:55.635 [2024-07-13 06:22:01.859001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.859142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.859169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.635 qpair failed and we were unable to recover it. 00:26:55.635 [2024-07-13 06:22:01.859287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.859408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.859434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.635 qpair failed and we were unable to recover it. 00:26:55.635 [2024-07-13 06:22:01.859581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.859746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.859773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.635 qpair failed and we were unable to recover it. 00:26:55.635 [2024-07-13 06:22:01.859885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.859996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.860023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.635 qpair failed and we were unable to recover it. 00:26:55.635 [2024-07-13 06:22:01.860165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.860279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.860305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.635 qpair failed and we were unable to recover it. 00:26:55.635 [2024-07-13 06:22:01.860424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.860533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.860560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.635 qpair failed and we were unable to recover it. 
00:26:55.635 [2024-07-13 06:22:01.860676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.860812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.860838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.635 qpair failed and we were unable to recover it. 00:26:55.635 [2024-07-13 06:22:01.860963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.861107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.861133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.635 qpair failed and we were unable to recover it. 00:26:55.635 [2024-07-13 06:22:01.861248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.861366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.861392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.635 qpair failed and we were unable to recover it. 00:26:55.635 [2024-07-13 06:22:01.861517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.861629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.861655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.635 qpair failed and we were unable to recover it. 00:26:55.635 [2024-07-13 06:22:01.861763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.861886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.861913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.635 qpair failed and we were unable to recover it. 00:26:55.635 [2024-07-13 06:22:01.862064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.862208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.862235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.635 qpair failed and we were unable to recover it. 00:26:55.635 [2024-07-13 06:22:01.862379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.862534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.862560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.635 qpair failed and we were unable to recover it. 
00:26:55.635 [2024-07-13 06:22:01.862672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.862800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.862827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.635 qpair failed and we were unable to recover it. 00:26:55.635 [2024-07-13 06:22:01.862951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.863079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.863105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.635 qpair failed and we were unable to recover it. 00:26:55.635 [2024-07-13 06:22:01.863224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.863365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.863392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.635 qpair failed and we were unable to recover it. 00:26:55.635 [2024-07-13 06:22:01.863533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.863652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.863678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.635 qpair failed and we were unable to recover it. 00:26:55.635 [2024-07-13 06:22:01.863800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.863949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.863976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.635 qpair failed and we were unable to recover it. 00:26:55.635 [2024-07-13 06:22:01.864110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.864223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.864249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.635 qpair failed and we were unable to recover it. 00:26:55.635 [2024-07-13 06:22:01.864397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.635 [2024-07-13 06:22:01.864527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.636 [2024-07-13 06:22:01.864564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.636 qpair failed and we were unable to recover it. 
00:26:55.636 [2024-07-13 06:22:01.864720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.636 [2024-07-13 06:22:01.864893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.636 [2024-07-13 06:22:01.864920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.636 qpair failed and we were unable to recover it. 00:26:55.636 [2024-07-13 06:22:01.865031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.636 [2024-07-13 06:22:01.865159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.636 [2024-07-13 06:22:01.865185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.636 qpair failed and we were unable to recover it. 00:26:55.636 [2024-07-13 06:22:01.865297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.636 [2024-07-13 06:22:01.865410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.636 [2024-07-13 06:22:01.865437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.636 qpair failed and we were unable to recover it. 00:26:55.636 [2024-07-13 06:22:01.865559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.636 [2024-07-13 06:22:01.865682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.636 [2024-07-13 06:22:01.865708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.636 qpair failed and we were unable to recover it. 00:26:55.636 [2024-07-13 06:22:01.865897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.636 [2024-07-13 06:22:01.866075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.636 [2024-07-13 06:22:01.866102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.636 qpair failed and we were unable to recover it. 00:26:55.636 [2024-07-13 06:22:01.866245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.636 [2024-07-13 06:22:01.866392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.636 [2024-07-13 06:22:01.866418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.636 qpair failed and we were unable to recover it. 00:26:55.636 [2024-07-13 06:22:01.866567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.636 [2024-07-13 06:22:01.866679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.636 [2024-07-13 06:22:01.866705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.636 qpair failed and we were unable to recover it. 
00:26:55.636 [2024-07-13 06:22:01.866878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.636 [2024-07-13 06:22:01.867003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.636 [2024-07-13 06:22:01.867029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.636 qpair failed and we were unable to recover it. 00:26:55.636 [2024-07-13 06:22:01.867152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.636 [2024-07-13 06:22:01.867297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.636 [2024-07-13 06:22:01.867323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.636 qpair failed and we were unable to recover it. 00:26:55.636 [2024-07-13 06:22:01.867473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.636 [2024-07-13 06:22:01.867610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.636 [2024-07-13 06:22:01.867636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.636 qpair failed and we were unable to recover it. 00:26:55.636 [2024-07-13 06:22:01.867757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.636 [2024-07-13 06:22:01.867907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.636 [2024-07-13 06:22:01.867934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.636 qpair failed and we were unable to recover it. 00:26:55.636 [2024-07-13 06:22:01.868075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.636 [2024-07-13 06:22:01.868198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.636 [2024-07-13 06:22:01.868225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.636 qpair failed and we were unable to recover it. 00:26:55.636 [2024-07-13 06:22:01.868368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.636 [2024-07-13 06:22:01.868517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.636 [2024-07-13 06:22:01.868545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.636 qpair failed and we were unable to recover it. 00:26:55.636 [2024-07-13 06:22:01.868698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.636 [2024-07-13 06:22:01.868839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.636 [2024-07-13 06:22:01.868870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.636 qpair failed and we were unable to recover it. 
00:26:55.641 [2024-07-13 06:22:01.911472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.641 [2024-07-13 06:22:01.911616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.641 [2024-07-13 06:22:01.911642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.641 qpair failed and we were unable to recover it. 00:26:55.641 [2024-07-13 06:22:01.911761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.641 [2024-07-13 06:22:01.911886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.641 [2024-07-13 06:22:01.911914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.641 qpair failed and we were unable to recover it. 00:26:55.641 [2024-07-13 06:22:01.912053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.641 [2024-07-13 06:22:01.912181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.641 [2024-07-13 06:22:01.912208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.641 qpair failed and we were unable to recover it. 00:26:55.642 [2024-07-13 06:22:01.912338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.912478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.912504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.642 qpair failed and we were unable to recover it. 00:26:55.642 [2024-07-13 06:22:01.912612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.912727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.912753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.642 qpair failed and we were unable to recover it. 00:26:55.642 [2024-07-13 06:22:01.912906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.913023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.913049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.642 qpair failed and we were unable to recover it. 00:26:55.642 [2024-07-13 06:22:01.913234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.913378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.913404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.642 qpair failed and we were unable to recover it. 
00:26:55.642 [2024-07-13 06:22:01.913534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.913649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.913675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.642 qpair failed and we were unable to recover it. 00:26:55.642 [2024-07-13 06:22:01.913799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.913952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.913978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.642 qpair failed and we were unable to recover it. 00:26:55.642 [2024-07-13 06:22:01.914093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.914211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.914237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.642 qpair failed and we were unable to recover it. 00:26:55.642 [2024-07-13 06:22:01.914364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.914487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.914513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.642 qpair failed and we were unable to recover it. 00:26:55.642 [2024-07-13 06:22:01.914654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.914822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.914848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.642 qpair failed and we were unable to recover it. 00:26:55.642 [2024-07-13 06:22:01.914981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.915127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.915153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.642 qpair failed and we were unable to recover it. 00:26:55.642 [2024-07-13 06:22:01.915278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.915425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.915451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.642 qpair failed and we were unable to recover it. 
00:26:55.642 [2024-07-13 06:22:01.915592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.915709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.915735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.642 qpair failed and we were unable to recover it. 00:26:55.642 [2024-07-13 06:22:01.915882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.916030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.916057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.642 qpair failed and we were unable to recover it. 00:26:55.642 [2024-07-13 06:22:01.916169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.916287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.916312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.642 qpair failed and we were unable to recover it. 00:26:55.642 [2024-07-13 06:22:01.916434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.916571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.916597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.642 qpair failed and we were unable to recover it. 00:26:55.642 [2024-07-13 06:22:01.916745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.916857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.916888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.642 qpair failed and we were unable to recover it. 00:26:55.642 [2024-07-13 06:22:01.917008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.917117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.917143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.642 qpair failed and we were unable to recover it. 00:26:55.642 [2024-07-13 06:22:01.917265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.917376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.917401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.642 qpair failed and we were unable to recover it. 
00:26:55.642 [2024-07-13 06:22:01.917543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.917660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.917685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.642 qpair failed and we were unable to recover it. 00:26:55.642 [2024-07-13 06:22:01.917829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.917997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.918024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.642 qpair failed and we were unable to recover it. 00:26:55.642 [2024-07-13 06:22:01.918140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.918252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.918278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.642 qpair failed and we were unable to recover it. 00:26:55.642 [2024-07-13 06:22:01.918394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.918518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.918544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.642 qpair failed and we were unable to recover it. 00:26:55.642 [2024-07-13 06:22:01.918658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.918798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.918823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.642 qpair failed and we were unable to recover it. 00:26:55.642 [2024-07-13 06:22:01.918940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.919068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.919094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.642 qpair failed and we were unable to recover it. 00:26:55.642 [2024-07-13 06:22:01.919265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.919378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.919404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.642 qpair failed and we were unable to recover it. 
00:26:55.642 [2024-07-13 06:22:01.919551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.919670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.919695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.642 qpair failed and we were unable to recover it. 00:26:55.642 [2024-07-13 06:22:01.919810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.919920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.919947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.642 qpair failed and we were unable to recover it. 00:26:55.642 [2024-07-13 06:22:01.920060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.920181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.642 [2024-07-13 06:22:01.920206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.642 qpair failed and we were unable to recover it. 00:26:55.643 [2024-07-13 06:22:01.920330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.920448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.920474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.643 qpair failed and we were unable to recover it. 00:26:55.643 [2024-07-13 06:22:01.920582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.920699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.920725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.643 qpair failed and we were unable to recover it. 00:26:55.643 [2024-07-13 06:22:01.920836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.920954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.920981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.643 qpair failed and we were unable to recover it. 00:26:55.643 [2024-07-13 06:22:01.921134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.921251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.921277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.643 qpair failed and we were unable to recover it. 
00:26:55.643 [2024-07-13 06:22:01.921418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.921530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.921555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.643 qpair failed and we were unable to recover it. 00:26:55.643 [2024-07-13 06:22:01.921698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.921817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.921843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.643 qpair failed and we were unable to recover it. 00:26:55.643 [2024-07-13 06:22:01.921962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.922094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.922120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.643 qpair failed and we were unable to recover it. 00:26:55.643 [2024-07-13 06:22:01.922262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.922388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.922414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.643 qpair failed and we were unable to recover it. 00:26:55.643 [2024-07-13 06:22:01.922557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.922697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.922722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.643 qpair failed and we were unable to recover it. 00:26:55.643 [2024-07-13 06:22:01.922830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.922979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.923006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.643 qpair failed and we were unable to recover it. 00:26:55.643 [2024-07-13 06:22:01.923129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.923241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.923267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.643 qpair failed and we were unable to recover it. 
00:26:55.643 [2024-07-13 06:22:01.923410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.923556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.923582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.643 qpair failed and we were unable to recover it. 00:26:55.643 [2024-07-13 06:22:01.923703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.923848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.923880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.643 qpair failed and we were unable to recover it. 00:26:55.643 [2024-07-13 06:22:01.923994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.924142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.924167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.643 qpair failed and we were unable to recover it. 00:26:55.643 [2024-07-13 06:22:01.924279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.924394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.924420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.643 qpair failed and we were unable to recover it. 00:26:55.643 [2024-07-13 06:22:01.924542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.924648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.924674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.643 qpair failed and we were unable to recover it. 00:26:55.643 [2024-07-13 06:22:01.924811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.924928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.924955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.643 qpair failed and we were unable to recover it. 00:26:55.643 [2024-07-13 06:22:01.925063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.925207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.925233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.643 qpair failed and we were unable to recover it. 
00:26:55.643 [2024-07-13 06:22:01.925357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.925497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.925523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.643 qpair failed and we were unable to recover it. 00:26:55.643 [2024-07-13 06:22:01.925638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.925784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.925811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.643 qpair failed and we were unable to recover it. 00:26:55.643 [2024-07-13 06:22:01.925954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.926077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.926104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.643 qpair failed and we were unable to recover it. 00:26:55.643 [2024-07-13 06:22:01.926244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.926379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.926405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.643 qpair failed and we were unable to recover it. 00:26:55.643 [2024-07-13 06:22:01.926522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.926661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.643 [2024-07-13 06:22:01.926687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.643 qpair failed and we were unable to recover it. 00:26:55.643 [2024-07-13 06:22:01.926825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.926949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.926976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.644 qpair failed and we were unable to recover it. 00:26:55.644 [2024-07-13 06:22:01.927105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.927247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.927273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.644 qpair failed and we were unable to recover it. 
00:26:55.644 [2024-07-13 06:22:01.927418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.927526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.927552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.644 qpair failed and we were unable to recover it. 00:26:55.644 [2024-07-13 06:22:01.927693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.927844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.927875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.644 qpair failed and we were unable to recover it. 00:26:55.644 [2024-07-13 06:22:01.927994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.928133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.928162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.644 qpair failed and we were unable to recover it. 00:26:55.644 [2024-07-13 06:22:01.928292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.928416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.928442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.644 qpair failed and we were unable to recover it. 00:26:55.644 [2024-07-13 06:22:01.928578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.928696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.928723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.644 qpair failed and we were unable to recover it. 00:26:55.644 [2024-07-13 06:22:01.928878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.929055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.929081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.644 qpair failed and we were unable to recover it. 00:26:55.644 [2024-07-13 06:22:01.929195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.929336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.929362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.644 qpair failed and we were unable to recover it. 
00:26:55.644 [2024-07-13 06:22:01.929530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.929647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.929673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.644 qpair failed and we were unable to recover it. 00:26:55.644 [2024-07-13 06:22:01.929787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.929912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.929938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.644 qpair failed and we were unable to recover it. 00:26:55.644 [2024-07-13 06:22:01.930052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.930178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.930203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.644 qpair failed and we were unable to recover it. 00:26:55.644 [2024-07-13 06:22:01.930321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.930443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.930470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.644 qpair failed and we were unable to recover it. 00:26:55.644 [2024-07-13 06:22:01.930622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.930766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.930792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.644 qpair failed and we were unable to recover it. 00:26:55.644 [2024-07-13 06:22:01.930913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.931062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.931092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.644 qpair failed and we were unable to recover it. 00:26:55.644 [2024-07-13 06:22:01.931211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.931325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.931352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.644 qpair failed and we were unable to recover it. 
00:26:55.644 [2024-07-13 06:22:01.931505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.931613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.931639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.644 qpair failed and we were unable to recover it. 00:26:55.644 [2024-07-13 06:22:01.931765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.931914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.931941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.644 qpair failed and we were unable to recover it. 00:26:55.644 [2024-07-13 06:22:01.932057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.932200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.932226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.644 qpair failed and we were unable to recover it. 00:26:55.644 [2024-07-13 06:22:01.932344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.932484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.932509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.644 qpair failed and we were unable to recover it. 00:26:55.644 [2024-07-13 06:22:01.932649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.932792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.932819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.644 qpair failed and we were unable to recover it. 00:26:55.644 [2024-07-13 06:22:01.932961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.933070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.933095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.644 qpair failed and we were unable to recover it. 00:26:55.644 [2024-07-13 06:22:01.933218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.933330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.933357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.644 qpair failed and we were unable to recover it. 
00:26:55.644 [2024-07-13 06:22:01.933474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.933590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.933616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.644 qpair failed and we were unable to recover it. 00:26:55.644 [2024-07-13 06:22:01.933755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.933895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.933925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.644 qpair failed and we were unable to recover it. 00:26:55.644 [2024-07-13 06:22:01.934055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.934222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.934248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.644 qpair failed and we were unable to recover it. 00:26:55.644 [2024-07-13 06:22:01.934367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.934495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.934520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.644 qpair failed and we were unable to recover it. 00:26:55.644 [2024-07-13 06:22:01.934642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.934759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.934786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.644 qpair failed and we were unable to recover it. 00:26:55.644 [2024-07-13 06:22:01.934915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.935030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.644 [2024-07-13 06:22:01.935056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.644 qpair failed and we were unable to recover it. 00:26:55.644 [2024-07-13 06:22:01.935208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.645 [2024-07-13 06:22:01.935324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.645 [2024-07-13 06:22:01.935350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.645 qpair failed and we were unable to recover it. 
00:26:55.645 [2024-07-13 06:22:01.935464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.645 [2024-07-13 06:22:01.935604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.645 [2024-07-13 06:22:01.935630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.645 qpair failed and we were unable to recover it. 00:26:55.645 [2024-07-13 06:22:01.935746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.645 [2024-07-13 06:22:01.935896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.645 [2024-07-13 06:22:01.935934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.645 qpair failed and we were unable to recover it. 00:26:55.645 [2024-07-13 06:22:01.936053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.645 [2024-07-13 06:22:01.936196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.645 [2024-07-13 06:22:01.936222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.645 qpair failed and we were unable to recover it. 00:26:55.645 [2024-07-13 06:22:01.936334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.645 [2024-07-13 06:22:01.936481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.645 [2024-07-13 06:22:01.936508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.645 qpair failed and we were unable to recover it. 00:26:55.645 [2024-07-13 06:22:01.936623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.645 [2024-07-13 06:22:01.936795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.645 [2024-07-13 06:22:01.936825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.645 qpair failed and we were unable to recover it. 00:26:55.645 [2024-07-13 06:22:01.936962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.645 [2024-07-13 06:22:01.937077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.645 [2024-07-13 06:22:01.937103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.645 qpair failed and we were unable to recover it. 00:26:55.645 [2024-07-13 06:22:01.937232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.645 [2024-07-13 06:22:01.937351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.645 [2024-07-13 06:22:01.937377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.645 qpair failed and we were unable to recover it. 
00:26:55.645 [2024-07-13 06:22:01.937493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.645 [2024-07-13 06:22:01.937642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.645 [2024-07-13 06:22:01.937668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.645 qpair failed and we were unable to recover it. 00:26:55.645 [2024-07-13 06:22:01.937803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.645 [2024-07-13 06:22:01.937933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.645 [2024-07-13 06:22:01.937960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.645 qpair failed and we were unable to recover it. 00:26:55.645 [2024-07-13 06:22:01.938098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.645 [2024-07-13 06:22:01.938219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.645 [2024-07-13 06:22:01.938245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.645 qpair failed and we were unable to recover it. 00:26:55.645 [2024-07-13 06:22:01.938353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.645 [2024-07-13 06:22:01.938471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.645 [2024-07-13 06:22:01.938497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.645 qpair failed and we were unable to recover it. 00:26:55.645 [2024-07-13 06:22:01.938664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.645 [2024-07-13 06:22:01.938778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.645 [2024-07-13 06:22:01.938804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.645 qpair failed and we were unable to recover it. 00:26:55.645 [2024-07-13 06:22:01.938953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.645 [2024-07-13 06:22:01.939069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.645 [2024-07-13 06:22:01.939096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.645 qpair failed and we were unable to recover it. 00:26:55.645 [2024-07-13 06:22:01.939228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.645 [2024-07-13 06:22:01.939342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.645 [2024-07-13 06:22:01.939368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.645 qpair failed and we were unable to recover it. 
00:26:55.645 [2024-07-13 06:22:01.939518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.645 [2024-07-13 06:22:01.939665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.645 [2024-07-13 06:22:01.939691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420
00:26:55.645 qpair failed and we were unable to recover it.
[... the same four-line sequence repeats back-to-back from 06:22:01.939518 through 06:22:01.984418 (elapsed 00:26:55.645 to 00:26:55.651): two posix_sock_create connect() failures with errno = 111, then an nvme_tcp_qpair_connect_sock error for tqpair=0x7f8d94000b90 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." ...]
00:26:55.651 [2024-07-13 06:22:01.984542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.984691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.984716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.651 qpair failed and we were unable to recover it. 00:26:55.651 [2024-07-13 06:22:01.984841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.984967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.984993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.651 qpair failed and we were unable to recover it. 00:26:55.651 [2024-07-13 06:22:01.985103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.985263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.985289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.651 qpair failed and we were unable to recover it. 00:26:55.651 [2024-07-13 06:22:01.985397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.985517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.985542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.651 qpair failed and we were unable to recover it. 00:26:55.651 [2024-07-13 06:22:01.985653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.985758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.985784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.651 qpair failed and we were unable to recover it. 00:26:55.651 [2024-07-13 06:22:01.985904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.986048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.986075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.651 qpair failed and we were unable to recover it. 00:26:55.651 [2024-07-13 06:22:01.986218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.986335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.986361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.651 qpair failed and we were unable to recover it. 
00:26:55.651 [2024-07-13 06:22:01.986475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.986585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.986610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.651 qpair failed and we were unable to recover it. 00:26:55.651 [2024-07-13 06:22:01.986757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.986864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.986911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.651 qpair failed and we were unable to recover it. 00:26:55.651 [2024-07-13 06:22:01.987021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.987166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.987193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.651 qpair failed and we were unable to recover it. 00:26:55.651 [2024-07-13 06:22:01.987315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.987420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.987446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.651 qpair failed and we were unable to recover it. 00:26:55.651 [2024-07-13 06:22:01.987584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.987699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.987725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.651 qpair failed and we were unable to recover it. 00:26:55.651 [2024-07-13 06:22:01.987835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.987959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.987985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.651 qpair failed and we were unable to recover it. 00:26:55.651 [2024-07-13 06:22:01.988097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.988213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.988239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.651 qpair failed and we were unable to recover it. 
00:26:55.651 [2024-07-13 06:22:01.988378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.988495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.988520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.651 qpair failed and we were unable to recover it. 00:26:55.651 [2024-07-13 06:22:01.988642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.988773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.988800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.651 qpair failed and we were unable to recover it. 00:26:55.651 [2024-07-13 06:22:01.988953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.989069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.989095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.651 qpair failed and we were unable to recover it. 00:26:55.651 [2024-07-13 06:22:01.989239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.989358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.989385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.651 qpair failed and we were unable to recover it. 00:26:55.651 [2024-07-13 06:22:01.989507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.989644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.989670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.651 qpair failed and we were unable to recover it. 00:26:55.651 [2024-07-13 06:22:01.989780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.989930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.989957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.651 qpair failed and we were unable to recover it. 00:26:55.651 [2024-07-13 06:22:01.990107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.990248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.990274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.651 qpair failed and we were unable to recover it. 
00:26:55.651 [2024-07-13 06:22:01.990398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.990518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.990544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.651 qpair failed and we were unable to recover it. 00:26:55.651 [2024-07-13 06:22:01.990654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.990773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.990798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.651 qpair failed and we were unable to recover it. 00:26:55.651 [2024-07-13 06:22:01.990940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.991079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.991104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.651 qpair failed and we were unable to recover it. 00:26:55.651 [2024-07-13 06:22:01.991213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.991354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.991380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.651 qpair failed and we were unable to recover it. 00:26:55.651 [2024-07-13 06:22:01.991490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.991601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.991627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.651 qpair failed and we were unable to recover it. 00:26:55.651 [2024-07-13 06:22:01.991743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.991909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.991935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.651 qpair failed and we were unable to recover it. 00:26:55.651 [2024-07-13 06:22:01.992045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.992160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.651 [2024-07-13 06:22:01.992187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.651 qpair failed and we were unable to recover it. 
00:26:55.651 [2024-07-13 06:22:01.992333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.992445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.992471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.652 qpair failed and we were unable to recover it. 00:26:55.652 [2024-07-13 06:22:01.992622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.992774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.992801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.652 qpair failed and we were unable to recover it. 00:26:55.652 [2024-07-13 06:22:01.992960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.993098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.993124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.652 qpair failed and we were unable to recover it. 00:26:55.652 [2024-07-13 06:22:01.993261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.993402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.993428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.652 qpair failed and we were unable to recover it. 00:26:55.652 [2024-07-13 06:22:01.993535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.993688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.993714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.652 qpair failed and we were unable to recover it. 00:26:55.652 [2024-07-13 06:22:01.993836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.993969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.993997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.652 qpair failed and we were unable to recover it. 00:26:55.652 [2024-07-13 06:22:01.994109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.994220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.994245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.652 qpair failed and we were unable to recover it. 
00:26:55.652 [2024-07-13 06:22:01.994376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.994510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.994536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.652 qpair failed and we were unable to recover it. 00:26:55.652 [2024-07-13 06:22:01.994658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.994769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.994795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.652 qpair failed and we were unable to recover it. 00:26:55.652 [2024-07-13 06:22:01.994919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.995061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.995087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.652 qpair failed and we were unable to recover it. 00:26:55.652 [2024-07-13 06:22:01.995245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.995413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.995438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.652 qpair failed and we were unable to recover it. 00:26:55.652 [2024-07-13 06:22:01.995550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.995691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.995717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.652 qpair failed and we were unable to recover it. 00:26:55.652 [2024-07-13 06:22:01.995872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.996018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.996044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.652 qpair failed and we were unable to recover it. 00:26:55.652 [2024-07-13 06:22:01.996153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.996274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.996301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.652 qpair failed and we were unable to recover it. 
00:26:55.652 [2024-07-13 06:22:01.996425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.996548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.996574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.652 qpair failed and we were unable to recover it. 00:26:55.652 [2024-07-13 06:22:01.996693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.996861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.996893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.652 qpair failed and we were unable to recover it. 00:26:55.652 [2024-07-13 06:22:01.997013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.997121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.997147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.652 qpair failed and we were unable to recover it. 00:26:55.652 [2024-07-13 06:22:01.997305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.997415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.997441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.652 qpair failed and we were unable to recover it. 00:26:55.652 [2024-07-13 06:22:01.997564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.997703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.997729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.652 qpair failed and we were unable to recover it. 00:26:55.652 [2024-07-13 06:22:01.997889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.998032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.998058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.652 qpair failed and we were unable to recover it. 00:26:55.652 [2024-07-13 06:22:01.998211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.998356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.998381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.652 qpair failed and we were unable to recover it. 
00:26:55.652 [2024-07-13 06:22:01.998499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.998613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.998638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.652 qpair failed and we were unable to recover it. 00:26:55.652 [2024-07-13 06:22:01.998752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.998891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.998918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.652 qpair failed and we were unable to recover it. 00:26:55.652 [2024-07-13 06:22:01.999038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.999153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.652 [2024-07-13 06:22:01.999179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.652 qpair failed and we were unable to recover it. 00:26:55.652 [2024-07-13 06:22:01.999286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:01.999398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:01.999425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.653 qpair failed and we were unable to recover it. 00:26:55.653 [2024-07-13 06:22:01.999572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:01.999692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:01.999718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.653 qpair failed and we were unable to recover it. 00:26:55.653 [2024-07-13 06:22:01.999860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:01.999985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.000011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.653 qpair failed and we were unable to recover it. 00:26:55.653 [2024-07-13 06:22:02.000152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.000273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.000300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.653 qpair failed and we were unable to recover it. 
00:26:55.653 [2024-07-13 06:22:02.000415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.000534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.000561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.653 qpair failed and we were unable to recover it. 00:26:55.653 [2024-07-13 06:22:02.000713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.000818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.000844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.653 qpair failed and we were unable to recover it. 00:26:55.653 [2024-07-13 06:22:02.000998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.001119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.001149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.653 qpair failed and we were unable to recover it. 00:26:55.653 [2024-07-13 06:22:02.001330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.001445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.001472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.653 qpair failed and we were unable to recover it. 00:26:55.653 [2024-07-13 06:22:02.001616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.001758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.001785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.653 qpair failed and we were unable to recover it. 00:26:55.653 [2024-07-13 06:22:02.001902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.002027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.002053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.653 qpair failed and we were unable to recover it. 00:26:55.653 [2024-07-13 06:22:02.002175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.002312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.002338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.653 qpair failed and we were unable to recover it. 
00:26:55.653 [2024-07-13 06:22:02.002472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.002588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.002614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.653 qpair failed and we were unable to recover it. 00:26:55.653 [2024-07-13 06:22:02.002734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.002877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.002904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.653 qpair failed and we were unable to recover it. 00:26:55.653 [2024-07-13 06:22:02.003046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.003159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.003186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.653 qpair failed and we were unable to recover it. 00:26:55.653 [2024-07-13 06:22:02.003299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.003424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.003450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.653 qpair failed and we were unable to recover it. 00:26:55.653 [2024-07-13 06:22:02.003572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.003742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.003768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.653 qpair failed and we were unable to recover it. 00:26:55.653 [2024-07-13 06:22:02.003925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.004036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.004062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.653 qpair failed and we were unable to recover it. 00:26:55.653 [2024-07-13 06:22:02.004183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.004319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.004345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.653 qpair failed and we were unable to recover it. 
00:26:55.653 [2024-07-13 06:22:02.004497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.004635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.004660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.653 qpair failed and we were unable to recover it. 00:26:55.653 [2024-07-13 06:22:02.004827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.004963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.004990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.653 qpair failed and we were unable to recover it. 00:26:55.653 [2024-07-13 06:22:02.005104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.005238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.005263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.653 qpair failed and we were unable to recover it. 00:26:55.653 [2024-07-13 06:22:02.005404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.005529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.005554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.653 qpair failed and we were unable to recover it. 00:26:55.653 [2024-07-13 06:22:02.005671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.005809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.005834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.653 qpair failed and we were unable to recover it. 00:26:55.653 [2024-07-13 06:22:02.005998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.006115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.006141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.653 qpair failed and we were unable to recover it. 00:26:55.653 [2024-07-13 06:22:02.006286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.006407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.006435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.653 qpair failed and we were unable to recover it. 
00:26:55.653 [2024-07-13 06:22:02.006584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.006730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.006756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.653 qpair failed and we were unable to recover it. 00:26:55.653 [2024-07-13 06:22:02.006903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.007026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.007053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.653 qpair failed and we were unable to recover it. 00:26:55.653 [2024-07-13 06:22:02.007201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.007320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.007346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.653 qpair failed and we were unable to recover it. 00:26:55.653 [2024-07-13 06:22:02.007462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.653 [2024-07-13 06:22:02.007631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.007657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.654 qpair failed and we were unable to recover it. 00:26:55.654 [2024-07-13 06:22:02.007772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.007921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.007948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.654 qpair failed and we were unable to recover it. 00:26:55.654 [2024-07-13 06:22:02.008099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.008216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.008243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.654 qpair failed and we were unable to recover it. 00:26:55.654 [2024-07-13 06:22:02.008364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.008488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.008514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.654 qpair failed and we were unable to recover it. 
00:26:55.654 [2024-07-13 06:22:02.008625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.008733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.008759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.654 qpair failed and we were unable to recover it. 00:26:55.654 [2024-07-13 06:22:02.008915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.009027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.009053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.654 qpair failed and we were unable to recover it. 00:26:55.654 [2024-07-13 06:22:02.009193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.009338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.009363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.654 qpair failed and we were unable to recover it. 00:26:55.654 [2024-07-13 06:22:02.009470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.009592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.009619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.654 qpair failed and we were unable to recover it. 00:26:55.654 [2024-07-13 06:22:02.009786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.009914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.009941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.654 qpair failed and we were unable to recover it. 00:26:55.654 [2024-07-13 06:22:02.010080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.010248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.010274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.654 qpair failed and we were unable to recover it. 00:26:55.654 [2024-07-13 06:22:02.010386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.010528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.010554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.654 qpair failed and we were unable to recover it. 
00:26:55.654 [2024-07-13 06:22:02.010691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.010815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.010840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.654 qpair failed and we were unable to recover it. 00:26:55.654 [2024-07-13 06:22:02.010987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.011096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.011122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.654 qpair failed and we were unable to recover it. 00:26:55.654 [2024-07-13 06:22:02.011242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.011352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.011377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.654 qpair failed and we were unable to recover it. 00:26:55.654 [2024-07-13 06:22:02.011543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.011649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.011675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.654 qpair failed and we were unable to recover it. 00:26:55.654 [2024-07-13 06:22:02.011817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.011944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.011970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.654 qpair failed and we were unable to recover it. 00:26:55.654 [2024-07-13 06:22:02.012117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.012273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.012306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.654 qpair failed and we were unable to recover it. 00:26:55.654 [2024-07-13 06:22:02.012427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.012530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.012555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.654 qpair failed and we were unable to recover it. 
00:26:55.654 [2024-07-13 06:22:02.012696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.012823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.012848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.654 qpair failed and we were unable to recover it. 00:26:55.654 [2024-07-13 06:22:02.012967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.013118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.013144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.654 qpair failed and we were unable to recover it. 00:26:55.654 [2024-07-13 06:22:02.013289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.013399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.013426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.654 qpair failed and we were unable to recover it. 00:26:55.654 [2024-07-13 06:22:02.013570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.013673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.013699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.654 qpair failed and we were unable to recover it. 00:26:55.654 [2024-07-13 06:22:02.013841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.013993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.014020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.654 qpair failed and we were unable to recover it. 00:26:55.654 [2024-07-13 06:22:02.014137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.014260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.014286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.654 qpair failed and we were unable to recover it. 00:26:55.654 [2024-07-13 06:22:02.014428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.014573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.654 [2024-07-13 06:22:02.014599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.654 qpair failed and we were unable to recover it. 
[... the same three-line connect()/qpair failure repeats with identical parameters for the remaining attempts ...]
00:26:55.660 [2024-07-13 06:22:02.057176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.660 [2024-07-13 06:22:02.057318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.660 [2024-07-13 06:22:02.057344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420
00:26:55.660 qpair failed and we were unable to recover it.
00:26:55.660 [2024-07-13 06:22:02.057469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.057608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.057633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.660 qpair failed and we were unable to recover it. 00:26:55.660 [2024-07-13 06:22:02.057761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.057906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.057933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.660 qpair failed and we were unable to recover it. 00:26:55.660 [2024-07-13 06:22:02.058048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.058163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.058190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.660 qpair failed and we were unable to recover it. 00:26:55.660 [2024-07-13 06:22:02.058301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.058411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.058437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.660 qpair failed and we were unable to recover it. 00:26:55.660 [2024-07-13 06:22:02.058549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.058713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.058739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.660 qpair failed and we were unable to recover it. 00:26:55.660 [2024-07-13 06:22:02.058856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.058976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.059003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.660 qpair failed and we were unable to recover it. 00:26:55.660 [2024-07-13 06:22:02.059110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.059276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.059302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.660 qpair failed and we were unable to recover it. 
00:26:55.660 [2024-07-13 06:22:02.059419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.059531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.059557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.660 qpair failed and we were unable to recover it. 00:26:55.660 [2024-07-13 06:22:02.059697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.059831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.059857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.660 qpair failed and we were unable to recover it. 00:26:55.660 [2024-07-13 06:22:02.060011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.060127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.060153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.660 qpair failed and we were unable to recover it. 00:26:55.660 [2024-07-13 06:22:02.060277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.060402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.060428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.660 qpair failed and we were unable to recover it. 00:26:55.660 [2024-07-13 06:22:02.060549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.060666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.060693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.660 qpair failed and we were unable to recover it. 00:26:55.660 [2024-07-13 06:22:02.060859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.061019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.061044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.660 qpair failed and we were unable to recover it. 00:26:55.660 [2024-07-13 06:22:02.061188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.061304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.061330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.660 qpair failed and we were unable to recover it. 
00:26:55.660 [2024-07-13 06:22:02.061455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.061625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.061650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.660 qpair failed and we were unable to recover it. 00:26:55.660 [2024-07-13 06:22:02.061770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.061891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.061918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.660 qpair failed and we were unable to recover it. 00:26:55.660 [2024-07-13 06:22:02.062028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.062144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.062170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.660 qpair failed and we were unable to recover it. 00:26:55.660 [2024-07-13 06:22:02.062294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.062412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.062438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.660 qpair failed and we were unable to recover it. 00:26:55.660 [2024-07-13 06:22:02.062548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.062693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.062719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.660 qpair failed and we were unable to recover it. 00:26:55.660 [2024-07-13 06:22:02.062830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.062958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.062985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.660 qpair failed and we were unable to recover it. 00:26:55.660 [2024-07-13 06:22:02.063102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.063214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.063239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.660 qpair failed and we were unable to recover it. 
00:26:55.660 [2024-07-13 06:22:02.063350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.063520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.063546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.660 qpair failed and we were unable to recover it. 00:26:55.660 [2024-07-13 06:22:02.063662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.063787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.063813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.660 qpair failed and we were unable to recover it. 00:26:55.660 [2024-07-13 06:22:02.063942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.064060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.064087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.660 qpair failed and we were unable to recover it. 00:26:55.660 [2024-07-13 06:22:02.064265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.064408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.064434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.660 qpair failed and we were unable to recover it. 00:26:55.660 [2024-07-13 06:22:02.064549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.064691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.064718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.660 qpair failed and we were unable to recover it. 00:26:55.660 [2024-07-13 06:22:02.064840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.064999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.660 [2024-07-13 06:22:02.065025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.661 qpair failed and we were unable to recover it. 00:26:55.661 [2024-07-13 06:22:02.065139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.065259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.065288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.661 qpair failed and we were unable to recover it. 
00:26:55.661 [2024-07-13 06:22:02.065459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.065569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.065595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.661 qpair failed and we were unable to recover it. 00:26:55.661 [2024-07-13 06:22:02.065716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.065885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.065914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.661 qpair failed and we were unable to recover it. 00:26:55.661 [2024-07-13 06:22:02.066062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.066185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.066211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.661 qpair failed and we were unable to recover it. 00:26:55.661 [2024-07-13 06:22:02.066326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.066469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.066497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.661 qpair failed and we were unable to recover it. 00:26:55.661 [2024-07-13 06:22:02.066640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.066762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.066788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.661 qpair failed and we were unable to recover it. 00:26:55.661 [2024-07-13 06:22:02.066912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.067029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.067055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.661 qpair failed and we were unable to recover it. 00:26:55.661 [2024-07-13 06:22:02.067206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.067315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.067341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.661 qpair failed and we were unable to recover it. 
00:26:55.661 [2024-07-13 06:22:02.067453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.067575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.067601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.661 qpair failed and we were unable to recover it. 00:26:55.661 [2024-07-13 06:22:02.067771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.067891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.067918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.661 qpair failed and we were unable to recover it. 00:26:55.661 [2024-07-13 06:22:02.068065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.068204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.068234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.661 qpair failed and we were unable to recover it. 00:26:55.661 [2024-07-13 06:22:02.068371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.068493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.068519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.661 qpair failed and we were unable to recover it. 00:26:55.661 [2024-07-13 06:22:02.068637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.068781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.068808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.661 qpair failed and we were unable to recover it. 00:26:55.661 [2024-07-13 06:22:02.068960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.069103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.069129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.661 qpair failed and we were unable to recover it. 00:26:55.661 [2024-07-13 06:22:02.069241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.069366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.069392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.661 qpair failed and we were unable to recover it. 
00:26:55.661 [2024-07-13 06:22:02.069564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.069714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.069740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.661 qpair failed and we were unable to recover it. 00:26:55.661 [2024-07-13 06:22:02.069852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.069978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.070003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.661 qpair failed and we were unable to recover it. 00:26:55.661 [2024-07-13 06:22:02.070149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.070293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.070319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.661 qpair failed and we were unable to recover it. 00:26:55.661 [2024-07-13 06:22:02.070442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.070580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.070606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.661 qpair failed and we were unable to recover it. 00:26:55.661 [2024-07-13 06:22:02.070751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.070862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.070895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.661 qpair failed and we were unable to recover it. 00:26:55.661 [2024-07-13 06:22:02.071008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.071123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.071154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.661 qpair failed and we were unable to recover it. 00:26:55.661 [2024-07-13 06:22:02.071271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.071396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.071422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.661 qpair failed and we were unable to recover it. 
00:26:55.661 [2024-07-13 06:22:02.071537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.071678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.071704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.661 qpair failed and we were unable to recover it. 00:26:55.661 [2024-07-13 06:22:02.071814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.661 [2024-07-13 06:22:02.071966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.071994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.662 qpair failed and we were unable to recover it. 00:26:55.662 [2024-07-13 06:22:02.072141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.072294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.072321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.662 qpair failed and we were unable to recover it. 00:26:55.662 [2024-07-13 06:22:02.072488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.072599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.072625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.662 qpair failed and we were unable to recover it. 00:26:55.662 [2024-07-13 06:22:02.072738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.072877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.072904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.662 qpair failed and we were unable to recover it. 00:26:55.662 [2024-07-13 06:22:02.073024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.073174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.073200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.662 qpair failed and we were unable to recover it. 00:26:55.662 [2024-07-13 06:22:02.073307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.073433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.073460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.662 qpair failed and we were unable to recover it. 
00:26:55.662 [2024-07-13 06:22:02.073626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.073795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.073821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.662 qpair failed and we were unable to recover it. 00:26:55.662 [2024-07-13 06:22:02.073944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.074084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.074115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.662 qpair failed and we were unable to recover it. 00:26:55.662 [2024-07-13 06:22:02.074259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.074379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.074405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.662 qpair failed and we were unable to recover it. 00:26:55.662 [2024-07-13 06:22:02.074534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.074676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.074702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.662 qpair failed and we were unable to recover it. 00:26:55.662 [2024-07-13 06:22:02.074848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.074969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.074995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.662 qpair failed and we were unable to recover it. 00:26:55.662 [2024-07-13 06:22:02.075115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.075229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.075255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.662 qpair failed and we were unable to recover it. 00:26:55.662 [2024-07-13 06:22:02.075365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.075506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.075531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.662 qpair failed and we were unable to recover it. 
00:26:55.662 [2024-07-13 06:22:02.075685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.075796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.075822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.662 qpair failed and we were unable to recover it. 00:26:55.662 [2024-07-13 06:22:02.075972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.076087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.076113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.662 qpair failed and we were unable to recover it. 00:26:55.662 [2024-07-13 06:22:02.076270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.076393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.076418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.662 qpair failed and we were unable to recover it. 00:26:55.662 [2024-07-13 06:22:02.076555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.076696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.076722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.662 qpair failed and we were unable to recover it. 00:26:55.662 [2024-07-13 06:22:02.076832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.076954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.076980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.662 qpair failed and we were unable to recover it. 00:26:55.662 [2024-07-13 06:22:02.077093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.077202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.077228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.662 qpair failed and we were unable to recover it. 00:26:55.662 [2024-07-13 06:22:02.077344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.077484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.077510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.662 qpair failed and we were unable to recover it. 
00:26:55.662 [2024-07-13 06:22:02.077651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.077756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.077781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.662 qpair failed and we were unable to recover it. 00:26:55.662 [2024-07-13 06:22:02.077898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.078036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.078062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.662 qpair failed and we were unable to recover it. 00:26:55.662 [2024-07-13 06:22:02.078183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.078332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.078358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.662 qpair failed and we were unable to recover it. 00:26:55.662 [2024-07-13 06:22:02.078481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.078650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.078677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.662 qpair failed and we were unable to recover it. 00:26:55.662 [2024-07-13 06:22:02.078788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.078940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.078967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.662 qpair failed and we were unable to recover it. 00:26:55.662 [2024-07-13 06:22:02.079090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.079257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.079283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.662 qpair failed and we were unable to recover it. 00:26:55.662 [2024-07-13 06:22:02.079427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.079538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.079564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.662 qpair failed and we were unable to recover it. 
00:26:55.662 [2024-07-13 06:22:02.079671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.079791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.662 [2024-07-13 06:22:02.079818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.662 qpair failed and we were unable to recover it. 00:26:55.663 [2024-07-13 06:22:02.079977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.080091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.080116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.663 qpair failed and we were unable to recover it. 00:26:55.663 [2024-07-13 06:22:02.080252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.080399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.080425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.663 qpair failed and we were unable to recover it. 00:26:55.663 [2024-07-13 06:22:02.080563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.080735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.080760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.663 qpair failed and we were unable to recover it. 00:26:55.663 [2024-07-13 06:22:02.080886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.081003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.081029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.663 qpair failed and we were unable to recover it. 00:26:55.663 [2024-07-13 06:22:02.081141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.081282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.081308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.663 qpair failed and we were unable to recover it. 00:26:55.663 [2024-07-13 06:22:02.081422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.081530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.081557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.663 qpair failed and we were unable to recover it. 
00:26:55.663 [2024-07-13 06:22:02.081674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.081812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.081838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.663 qpair failed and we were unable to recover it. 00:26:55.663 [2024-07-13 06:22:02.081960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.082129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.082154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.663 qpair failed and we were unable to recover it. 00:26:55.663 [2024-07-13 06:22:02.082264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.082402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.082427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.663 qpair failed and we were unable to recover it. 00:26:55.663 [2024-07-13 06:22:02.082543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.082690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.082715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.663 qpair failed and we were unable to recover it. 00:26:55.663 [2024-07-13 06:22:02.082855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.082971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.082997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.663 qpair failed and we were unable to recover it. 00:26:55.663 [2024-07-13 06:22:02.083115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.083236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.083261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.663 qpair failed and we were unable to recover it. 00:26:55.663 [2024-07-13 06:22:02.083372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.083501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.083527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.663 qpair failed and we were unable to recover it. 
00:26:55.663 [2024-07-13 06:22:02.083649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.083791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.083817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.663 qpair failed and we were unable to recover it. 00:26:55.663 [2024-07-13 06:22:02.083959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.084079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.084105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.663 qpair failed and we were unable to recover it. 00:26:55.663 [2024-07-13 06:22:02.084231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.084377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.084405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.663 qpair failed and we were unable to recover it. 00:26:55.663 [2024-07-13 06:22:02.084531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.084652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.084678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.663 qpair failed and we were unable to recover it. 00:26:55.663 [2024-07-13 06:22:02.084785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.084928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.084955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.663 qpair failed and we were unable to recover it. 00:26:55.663 [2024-07-13 06:22:02.085094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.085229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.085255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.663 qpair failed and we were unable to recover it. 00:26:55.663 [2024-07-13 06:22:02.085376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.085529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.085554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.663 qpair failed and we were unable to recover it. 
00:26:55.663 [2024-07-13 06:22:02.085673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.085824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.085850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.663 qpair failed and we were unable to recover it. 00:26:55.663 [2024-07-13 06:22:02.085974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.086124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.086150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.663 qpair failed and we were unable to recover it. 00:26:55.663 [2024-07-13 06:22:02.086260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.086396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.086422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.663 qpair failed and we were unable to recover it. 00:26:55.663 [2024-07-13 06:22:02.086550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.086698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.086723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.663 qpair failed and we were unable to recover it. 00:26:55.663 [2024-07-13 06:22:02.086844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.086959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.086985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.663 qpair failed and we were unable to recover it. 00:26:55.663 [2024-07-13 06:22:02.087103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.087247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.087273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.663 qpair failed and we were unable to recover it. 00:26:55.663 [2024-07-13 06:22:02.087414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.087552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.663 [2024-07-13 06:22:02.087578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.663 qpair failed and we were unable to recover it. 
[... the same failure pattern repeats continuously from 06:22:02.087736 through 06:22:02.131218 (elapsed 00:26:55.663 to 00:26:55.933): posix.c:1032:posix_sock_create reports "connect() failed, errno = 111", nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock reports "sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420", and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:26:55.933 [2024-07-13 06:22:02.131361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.933 [2024-07-13 06:22:02.131475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.933 [2024-07-13 06:22:02.131502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.933 qpair failed and we were unable to recover it. 00:26:55.933 [2024-07-13 06:22:02.131618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.933 [2024-07-13 06:22:02.131735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.933 [2024-07-13 06:22:02.131761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.933 qpair failed and we were unable to recover it. 00:26:55.933 [2024-07-13 06:22:02.131884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.933 [2024-07-13 06:22:02.132002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.933 [2024-07-13 06:22:02.132028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.933 qpair failed and we were unable to recover it. 00:26:55.933 [2024-07-13 06:22:02.132178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.933 [2024-07-13 06:22:02.132318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.933 [2024-07-13 06:22:02.132344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.933 qpair failed and we were unable to recover it. 00:26:55.933 [2024-07-13 06:22:02.132489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.933 [2024-07-13 06:22:02.132609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.933 [2024-07-13 06:22:02.132635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.933 qpair failed and we were unable to recover it. 00:26:55.933 [2024-07-13 06:22:02.132747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.933 [2024-07-13 06:22:02.132877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.933 [2024-07-13 06:22:02.132904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.933 qpair failed and we were unable to recover it. 00:26:55.933 [2024-07-13 06:22:02.133024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.933 [2024-07-13 06:22:02.133170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.933 [2024-07-13 06:22:02.133196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.933 qpair failed and we were unable to recover it. 
00:26:55.933 [2024-07-13 06:22:02.133315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.933 [2024-07-13 06:22:02.133446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.933 [2024-07-13 06:22:02.133473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.933 qpair failed and we were unable to recover it. 00:26:55.934 [2024-07-13 06:22:02.133586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.934 [2024-07-13 06:22:02.133730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.934 [2024-07-13 06:22:02.133756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.934 qpair failed and we were unable to recover it. 00:26:55.934 [2024-07-13 06:22:02.133922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.934 [2024-07-13 06:22:02.134041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.934 [2024-07-13 06:22:02.134068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.934 qpair failed and we were unable to recover it. 00:26:55.934 [2024-07-13 06:22:02.134191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.934 [2024-07-13 06:22:02.134332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.934 [2024-07-13 06:22:02.134358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.934 qpair failed and we were unable to recover it. 00:26:55.934 [2024-07-13 06:22:02.134484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.934 [2024-07-13 06:22:02.134603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.934 [2024-07-13 06:22:02.134629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.934 qpair failed and we were unable to recover it. 00:26:55.934 [2024-07-13 06:22:02.134813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.934 [2024-07-13 06:22:02.134955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.934 [2024-07-13 06:22:02.134982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.934 qpair failed and we were unable to recover it. 00:26:55.934 [2024-07-13 06:22:02.135101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.934 [2024-07-13 06:22:02.135212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.934 [2024-07-13 06:22:02.135238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420 00:26:55.934 qpair failed and we were unable to recover it. 
00:26:55.934 [2024-07-13 06:22:02.135394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.934 [2024-07-13 06:22:02.135537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.934 [2024-07-13 06:22:02.135563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420
00:26:55.934 qpair failed and we were unable to recover it.
00:26:55.934 [2024-07-13 06:22:02.135678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.934 [2024-07-13 06:22:02.135824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.934 [2024-07-13 06:22:02.135850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420
00:26:55.934 qpair failed and we were unable to recover it.
00:26:55.934 [2024-07-13 06:22:02.135986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.934 06:22:02 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:26:55.934 06:22:02 -- common/autotest_common.sh@852 -- # return 0
00:26:55.934 [2024-07-13 06:22:02.136102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.934 [2024-07-13 06:22:02.136128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420
00:26:55.934 qpair failed and we were unable to recover it.
00:26:55.934 [2024-07-13 06:22:02.136286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.934 06:22:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:26:55.934 06:22:02 -- common/autotest_common.sh@718 -- # xtrace_disable
00:26:55.934 [2024-07-13 06:22:02.136404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.934 [2024-07-13 06:22:02.136431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420
00:26:55.934 qpair failed and we were unable to recover it.
00:26:55.934 06:22:02 -- common/autotest_common.sh@10 -- # set +x
00:26:55.934 [2024-07-13 06:22:02.136554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.934 [2024-07-13 06:22:02.136670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.934 [2024-07-13 06:22:02.136696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420
00:26:55.934 qpair failed and we were unable to recover it.
00:26:55.934 [2024-07-13 06:22:02.136831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.934 [2024-07-13 06:22:02.136961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.934 [2024-07-13 06:22:02.136986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420
00:26:55.934 qpair failed and we were unable to recover it.
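The errno in the repeated connect() failures above is 111, which on Linux is ECONNREFUSED: at this point in the run nothing is yet accepting TCP connections on 10.0.0.2:4420, so every attempt by the NVMe/TCP initiator is refused and the qpair cannot be established. As a minimal sketch (plain BSD sockets, not SPDK's posix_sock_create(); the address and port are simply the values from the log), the same failure looks like this:

```c
/* Illustrative only -- not SPDK code.  If the address is reachable but no
 * listener is bound to the port, connect() fails with errno 111, which
 * glibc names ECONNREFUSED ("Connection refused"). */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no NVMe-oF target listening yet, this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}
```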
00:26:55.934 [2024-07-13 06:22:02.137101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.934 [2024-07-13 06:22:02.137258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.934 [2024-07-13 06:22:02.137283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d94000b90 with addr=10.0.0.2, port=4420
00:26:55.934 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7f8d94000b90 from 06:22:02.137436 through 06:22:02.137845 ...]
00:26:55.934 [2024-07-13 06:22:02.137987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.934 [2024-07-13 06:22:02.138143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.934 [2024-07-13 06:22:02.138175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420
00:26:55.934 qpair failed and we were unable to recover it.
[... from here on the failing qpair is 0x7f8d8c000b90; the same three-message sequence repeats for every attempt from 06:22:02.138334 through 06:22:02.157949, always with addr=10.0.0.2, port=4420 ...]
00:26:55.936 [2024-07-13 06:22:02.158122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.936 [2024-07-13 06:22:02.158279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.936 [2024-07-13 06:22:02.158305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420
00:26:55.936 qpair failed and we were unable to recover it.
00:26:55.936 [2024-07-13 06:22:02.158446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.936 06:22:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:55.936 [2024-07-13 06:22:02.158604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.936 [2024-07-13 06:22:02.158636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420
00:26:55.936 qpair failed and we were unable to recover it.
00:26:55.936 [2024-07-13 06:22:02.158751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.936 06:22:02 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:55.936 [2024-07-13 06:22:02.158895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.936 [2024-07-13 06:22:02.158935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420
00:26:55.936 qpair failed and we were unable to recover it.
00:26:55.936 06:22:02 -- common/autotest_common.sh@551 -- # xtrace_disable
00:26:55.936 [2024-07-13 06:22:02.159045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.937 [2024-07-13 06:22:02.159179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.937 [2024-07-13 06:22:02.159206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420
00:26:55.937 06:22:02 -- common/autotest_common.sh@10 -- # set +x
00:26:55.937 qpair failed and we were unable to recover it.
00:26:55.937 [2024-07-13 06:22:02.159378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.937 [2024-07-13 06:22:02.159511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.937 [2024-07-13 06:22:02.159538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420
00:26:55.937 qpair failed and we were unable to recover it.
00:26:55.937 [2024-07-13 06:22:02.159658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.937 [2024-07-13 06:22:02.159799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.937 [2024-07-13 06:22:02.159825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420
00:26:55.937 qpair failed and we were unable to recover it.
00:26:55.937 [2024-07-13 06:22:02.159952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.937 [2024-07-13 06:22:02.160074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.937 [2024-07-13 06:22:02.160100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420
00:26:55.937 qpair failed and we were unable to recover it.
00:26:55.937 [2024-07-13 06:22:02.160271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.937 [2024-07-13 06:22:02.160417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.937 [2024-07-13 06:22:02.160444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420
00:26:55.937 qpair failed and we were unable to recover it.
[... the same three-message sequence repeats for every attempt from 06:22:02.160595 through 06:22:02.168517, all for tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 ...]
00:26:55.938 [2024-07-13 06:22:02.168633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.938 [2024-07-13 06:22:02.168763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:55.938 [2024-07-13 06:22:02.168790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420
00:26:55.938 qpair failed and we were unable to recover it.
00:26:55.938 [2024-07-13 06:22:02.168929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.169072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.169099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.938 qpair failed and we were unable to recover it. 00:26:55.938 [2024-07-13 06:22:02.169255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.169398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.169425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.938 qpair failed and we were unable to recover it. 00:26:55.938 [2024-07-13 06:22:02.169579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.169703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.169730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.938 qpair failed and we were unable to recover it. 00:26:55.938 [2024-07-13 06:22:02.169882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.170045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.170071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.938 qpair failed and we were unable to recover it. 00:26:55.938 [2024-07-13 06:22:02.170235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.170382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.170408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.938 qpair failed and we were unable to recover it. 00:26:55.938 [2024-07-13 06:22:02.170551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.170698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.170724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.938 qpair failed and we were unable to recover it. 00:26:55.938 [2024-07-13 06:22:02.170833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.170982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.171008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.938 qpair failed and we were unable to recover it. 
00:26:55.938 [2024-07-13 06:22:02.171159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.171318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.171345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.938 qpair failed and we were unable to recover it. 00:26:55.938 [2024-07-13 06:22:02.171528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.171666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.171692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.938 qpair failed and we were unable to recover it. 00:26:55.938 [2024-07-13 06:22:02.171844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.171966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.172000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.938 qpair failed and we were unable to recover it. 00:26:55.938 [2024-07-13 06:22:02.172198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.172348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.172376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.938 qpair failed and we were unable to recover it. 00:26:55.938 [2024-07-13 06:22:02.172524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.172678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.172705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.938 qpair failed and we were unable to recover it. 00:26:55.938 [2024-07-13 06:22:02.172887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.173024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.173052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.938 qpair failed and we were unable to recover it. 00:26:55.938 [2024-07-13 06:22:02.173177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.173336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.173365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.938 qpair failed and we were unable to recover it. 
00:26:55.938 [2024-07-13 06:22:02.173486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.173634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.173661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.938 qpair failed and we were unable to recover it. 00:26:55.938 [2024-07-13 06:22:02.173811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.173964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.173991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.938 qpair failed and we were unable to recover it. 00:26:55.938 [2024-07-13 06:22:02.174114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.174267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.174295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.938 qpair failed and we were unable to recover it. 00:26:55.938 [2024-07-13 06:22:02.174449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.174564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.174591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.938 qpair failed and we were unable to recover it. 00:26:55.938 [2024-07-13 06:22:02.174721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.174864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.174935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.938 qpair failed and we were unable to recover it. 00:26:55.938 [2024-07-13 06:22:02.175098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.175223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.175250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.938 qpair failed and we were unable to recover it. 00:26:55.938 [2024-07-13 06:22:02.175419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.175533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.175559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.938 qpair failed and we were unable to recover it. 
00:26:55.938 [2024-07-13 06:22:02.175711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.175834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.175862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.938 qpair failed and we were unable to recover it. 00:26:55.938 [2024-07-13 06:22:02.176028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.176169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.176195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.938 qpair failed and we were unable to recover it. 00:26:55.938 [2024-07-13 06:22:02.176352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.176505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.176533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.938 qpair failed and we were unable to recover it. 00:26:55.938 [2024-07-13 06:22:02.176678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.176831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.176874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.938 qpair failed and we were unable to recover it. 00:26:55.938 [2024-07-13 06:22:02.177034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.177186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.177213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.938 qpair failed and we were unable to recover it. 00:26:55.938 [2024-07-13 06:22:02.177368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.177502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.177528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.938 qpair failed and we were unable to recover it. 00:26:55.938 [2024-07-13 06:22:02.177684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.177802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.177828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.938 qpair failed and we were unable to recover it. 
00:26:55.938 [2024-07-13 06:22:02.177984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.178131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.938 [2024-07-13 06:22:02.178157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.938 qpair failed and we were unable to recover it. 00:26:55.938 [2024-07-13 06:22:02.178330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.178441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.178468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.939 qpair failed and we were unable to recover it. 00:26:55.939 [2024-07-13 06:22:02.178614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.178760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.178788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.939 qpair failed and we were unable to recover it. 00:26:55.939 [2024-07-13 06:22:02.178945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.179061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.179088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.939 qpair failed and we were unable to recover it. 00:26:55.939 [2024-07-13 06:22:02.179239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.179422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.179448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.939 qpair failed and we were unable to recover it. 00:26:55.939 [2024-07-13 06:22:02.179578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.179713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.179741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.939 qpair failed and we were unable to recover it. 00:26:55.939 [2024-07-13 06:22:02.179887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.180063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.180090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.939 qpair failed and we were unable to recover it. 
00:26:55.939 [2024-07-13 06:22:02.180250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.180378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.180405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.939 qpair failed and we were unable to recover it. 00:26:55.939 [2024-07-13 06:22:02.180585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.180732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.180759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.939 qpair failed and we were unable to recover it. 00:26:55.939 [2024-07-13 06:22:02.180888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.181014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.181040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.939 qpair failed and we were unable to recover it. 00:26:55.939 [2024-07-13 06:22:02.181163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.181292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.181319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.939 qpair failed and we were unable to recover it. 00:26:55.939 [2024-07-13 06:22:02.181469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.181622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.181648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.939 qpair failed and we were unable to recover it. 00:26:55.939 [2024-07-13 06:22:02.181767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.181897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.181925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.939 qpair failed and we were unable to recover it. 00:26:55.939 [2024-07-13 06:22:02.182051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.182162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.182188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.939 qpair failed and we were unable to recover it. 
00:26:55.939 [2024-07-13 06:22:02.182300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.182456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.182483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.939 qpair failed and we were unable to recover it. 00:26:55.939 [2024-07-13 06:22:02.182601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.182754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.182780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.939 qpair failed and we were unable to recover it. 00:26:55.939 [2024-07-13 06:22:02.182937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.183075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.183111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.939 qpair failed and we were unable to recover it. 00:26:55.939 [2024-07-13 06:22:02.183267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.183387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.183414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.939 qpair failed and we were unable to recover it. 00:26:55.939 [2024-07-13 06:22:02.183556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.183689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.183716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.939 qpair failed and we were unable to recover it. 00:26:55.939 [2024-07-13 06:22:02.183858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.184014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.184039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.939 qpair failed and we were unable to recover it. 00:26:55.939 [2024-07-13 06:22:02.184176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.184296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.184323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.939 qpair failed and we were unable to recover it. 
00:26:55.939 [2024-07-13 06:22:02.184471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.184607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.184632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.939 qpair failed and we were unable to recover it. 00:26:55.939 [2024-07-13 06:22:02.184754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 Malloc0 00:26:55.939 [2024-07-13 06:22:02.184874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.184901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.939 qpair failed and we were unable to recover it. 00:26:55.939 [2024-07-13 06:22:02.185052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 06:22:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:55.939 [2024-07-13 06:22:02.185177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.185204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.939 qpair failed and we were unable to recover it. 00:26:55.939 06:22:02 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:55.939 [2024-07-13 06:22:02.185317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 06:22:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:55.939 [2024-07-13 06:22:02.185486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.185513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.939 qpair failed and we were unable to recover it. 00:26:55.939 06:22:02 -- common/autotest_common.sh@10 -- # set +x 00:26:55.939 [2024-07-13 06:22:02.185653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.185766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.185793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.939 qpair failed and we were unable to recover it. 00:26:55.939 [2024-07-13 06:22:02.185920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.186089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.186116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.939 qpair failed and we were unable to recover it. 00:26:55.939 [2024-07-13 06:22:02.186304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.186449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.186476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.939 qpair failed and we were unable to recover it. 
00:26:55.939 [2024-07-13 06:22:02.186629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.186806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.186832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.939 qpair failed and we were unable to recover it. 00:26:55.939 [2024-07-13 06:22:02.187000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.187143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.187170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.939 qpair failed and we were unable to recover it. 00:26:55.939 [2024-07-13 06:22:02.187345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.187473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.939 [2024-07-13 06:22:02.187498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.939 qpair failed and we were unable to recover it. 00:26:55.940 [2024-07-13 06:22:02.187611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.187721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.187748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.940 qpair failed and we were unable to recover it. 00:26:55.940 [2024-07-13 06:22:02.187941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.188064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.188091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.940 qpair failed and we were unable to recover it. 00:26:55.940 [2024-07-13 06:22:02.188263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.188407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.188433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.940 qpair failed and we were unable to recover it. 00:26:55.940 [2024-07-13 06:22:02.188481] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:55.940 [2024-07-13 06:22:02.188566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.188692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.188719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.940 qpair failed and we were unable to recover it. 
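The errno = 111 repeating through these connect() failures is Linux ECONNREFUSED: the host-side initiator keeps retrying 10.0.0.2 on port 4420 (the standard NVMe/TCP port), and each qpair is torn down because nothing on the target is accepting connections yet. The shell trace interleaved with the errors shows why: the target half of host/target_disconnect.sh is only now bringing up its transport with rpc_cmd nvmf_create_transport -t tcp -o, acknowledged by the "*** TCP Transport Init ***" notice just above. A minimal sketch of that step using SPDK's scripts/rpc.py directly instead of the test's rpc_cmd wrapper; the SPDK checkout path is an assumption, not taken from this run, and the flags simply mirror the trace:

    # Sketch only: create the NVMe-oF TCP transport inside a running nvmf_tgt,
    # mirroring the rpc_cmd invocation captured in the trace above.
    SPDK_DIR=/path/to/spdk                                   # assumed checkout path
    "$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t tcp -o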
00:26:55.940 [2024-07-13 06:22:02.188845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.189004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.189030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.940 qpair failed and we were unable to recover it. 00:26:55.940 [2024-07-13 06:22:02.189179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.189297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.189324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.940 qpair failed and we were unable to recover it. 00:26:55.940 [2024-07-13 06:22:02.189468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.189593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.189621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.940 qpair failed and we were unable to recover it. 00:26:55.940 [2024-07-13 06:22:02.189750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.189898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.189924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.940 qpair failed and we were unable to recover it. 00:26:55.940 [2024-07-13 06:22:02.190100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.190220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.190247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.940 qpair failed and we were unable to recover it. 00:26:55.940 [2024-07-13 06:22:02.190365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.190509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.190535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.940 qpair failed and we were unable to recover it. 00:26:55.940 [2024-07-13 06:22:02.190678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.190821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.190847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.940 qpair failed and we were unable to recover it. 
00:26:55.940 [2024-07-13 06:22:02.190990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.191154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.191180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.940 qpair failed and we were unable to recover it. 00:26:55.940 [2024-07-13 06:22:02.191329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.191440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.191465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.940 qpair failed and we were unable to recover it. 00:26:55.940 [2024-07-13 06:22:02.191620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.191728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.191754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.940 qpair failed and we were unable to recover it. 00:26:55.940 [2024-07-13 06:22:02.191882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.192026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.192053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.940 qpair failed and we were unable to recover it. 00:26:55.940 [2024-07-13 06:22:02.192172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.192295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.192323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.940 qpair failed and we were unable to recover it. 00:26:55.940 [2024-07-13 06:22:02.192458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.192583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.192609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.940 qpair failed and we were unable to recover it. 00:26:55.940 [2024-07-13 06:22:02.192786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.192919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.192946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.940 qpair failed and we were unable to recover it. 
00:26:55.940 [2024-07-13 06:22:02.193062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.193180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.193206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.940 qpair failed and we were unable to recover it. 00:26:55.940 [2024-07-13 06:22:02.193351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.193473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.193499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.940 qpair failed and we were unable to recover it. 00:26:55.940 [2024-07-13 06:22:02.193660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.193811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.193838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.940 qpair failed and we were unable to recover it. 00:26:55.940 [2024-07-13 06:22:02.193991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.194138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.194164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.940 qpair failed and we were unable to recover it. 00:26:55.940 [2024-07-13 06:22:02.194310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.194421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.194448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.940 qpair failed and we were unable to recover it. 00:26:55.940 [2024-07-13 06:22:02.194574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.194725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.194753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.940 qpair failed and we were unable to recover it. 00:26:55.940 [2024-07-13 06:22:02.194882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.195011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.195036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.940 qpair failed and we were unable to recover it. 
00:26:55.940 [2024-07-13 06:22:02.195182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.195304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.195330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.940 qpair failed and we were unable to recover it. 00:26:55.940 [2024-07-13 06:22:02.195559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.195671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.195697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.940 qpair failed and we were unable to recover it. 00:26:55.940 [2024-07-13 06:22:02.195817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.195932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.195959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.940 qpair failed and we were unable to recover it. 00:26:55.940 [2024-07-13 06:22:02.196104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.196253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.196279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.940 qpair failed and we were unable to recover it. 00:26:55.940 [2024-07-13 06:22:02.196414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.196531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.940 [2024-07-13 06:22:02.196558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.941 qpair failed and we were unable to recover it. 00:26:55.941 [2024-07-13 06:22:02.196702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 06:22:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:55.941 [2024-07-13 06:22:02.196838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.196872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.941 qpair failed and we were unable to recover it. 
00:26:55.941 06:22:02 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:55.941 06:22:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:55.941 [2024-07-13 06:22:02.197020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 06:22:02 -- common/autotest_common.sh@10 -- # set +x 00:26:55.941 [2024-07-13 06:22:02.197175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.197201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.941 qpair failed and we were unable to recover it. 00:26:55.941 [2024-07-13 06:22:02.197321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.197456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.197482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.941 qpair failed and we were unable to recover it. 00:26:55.941 [2024-07-13 06:22:02.197655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.197763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.197788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.941 qpair failed and we were unable to recover it. 00:26:55.941 [2024-07-13 06:22:02.197914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.198031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.198057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.941 qpair failed and we were unable to recover it. 00:26:55.941 [2024-07-13 06:22:02.198190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.198348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.198374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.941 qpair failed and we were unable to recover it. 00:26:55.941 [2024-07-13 06:22:02.198496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.198625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.198651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.941 qpair failed and we were unable to recover it. 00:26:55.941 [2024-07-13 06:22:02.198775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.198918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.198945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.941 qpair failed and we were unable to recover it. 
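Between the retries the trace reaches the next target-side step: rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001, creating the subsystem the initiator is trying to reach, with any host allowed (-a) and the serial number SPDK00000000000001 (-s). A rough equivalent as a direct rpc.py call, with the same arguments as the trace and the rpc.py path again assumed:

    # Sketch: create the target subsystem, allow any host, set the serial number.
    SPDK_DIR=/path/to/spdk                                   # assumed checkout path
    "$SPDK_DIR/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001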
00:26:55.941 [2024-07-13 06:22:02.199094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.199239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.199265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.941 qpair failed and we were unable to recover it. 00:26:55.941 [2024-07-13 06:22:02.199383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.199510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.199535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.941 qpair failed and we were unable to recover it. 00:26:55.941 [2024-07-13 06:22:02.199678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.199796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.199822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.941 qpair failed and we were unable to recover it. 00:26:55.941 [2024-07-13 06:22:02.199947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.200088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.200113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.941 qpair failed and we were unable to recover it. 00:26:55.941 [2024-07-13 06:22:02.200261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.200397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.200424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.941 qpair failed and we were unable to recover it. 00:26:55.941 [2024-07-13 06:22:02.200559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.200700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.200726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.941 qpair failed and we were unable to recover it. 00:26:55.941 [2024-07-13 06:22:02.200843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.201004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.201031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.941 qpair failed and we were unable to recover it. 
00:26:55.941 [2024-07-13 06:22:02.201187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.201306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.201332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.941 qpair failed and we were unable to recover it. 00:26:55.941 [2024-07-13 06:22:02.201450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.201600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.201626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.941 qpair failed and we were unable to recover it. 00:26:55.941 [2024-07-13 06:22:02.201739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.201883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.201910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.941 qpair failed and we were unable to recover it. 00:26:55.941 [2024-07-13 06:22:02.202066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.202206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.202231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.941 qpair failed and we were unable to recover it. 00:26:55.941 [2024-07-13 06:22:02.202376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.202524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.202550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.941 qpair failed and we were unable to recover it. 00:26:55.941 [2024-07-13 06:22:02.202696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.202875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.202902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.941 qpair failed and we were unable to recover it. 00:26:55.941 [2024-07-13 06:22:02.203024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.203142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.203168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.941 qpair failed and we were unable to recover it. 
00:26:55.941 [2024-07-13 06:22:02.203298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.203451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.203477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.941 qpair failed and we were unable to recover it. 00:26:55.941 [2024-07-13 06:22:02.203625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.203772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.203798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.941 qpair failed and we were unable to recover it. 00:26:55.941 [2024-07-13 06:22:02.203928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.204046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.204073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.941 qpair failed and we were unable to recover it. 00:26:55.941 [2024-07-13 06:22:02.204185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.204324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.941 [2024-07-13 06:22:02.204350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.941 qpair failed and we were unable to recover it. 00:26:55.942 [2024-07-13 06:22:02.204494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.204641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.204668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.942 qpair failed and we were unable to recover it. 00:26:55.942 [2024-07-13 06:22:02.204789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 06:22:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:55.942 [2024-07-13 06:22:02.204938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 06:22:02 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:55.942 [2024-07-13 06:22:02.204966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.942 qpair failed and we were unable to recover it. 00:26:55.942 06:22:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:55.942 [2024-07-13 06:22:02.205113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 06:22:02 -- common/autotest_common.sh@10 -- # set +x 00:26:55.942 [2024-07-13 06:22:02.205256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.205282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.942 qpair failed and we were unable to recover it. 
00:26:55.942 [2024-07-13 06:22:02.205407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.205530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.205556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.942 qpair failed and we were unable to recover it. 00:26:55.942 [2024-07-13 06:22:02.205698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.205847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.205882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.942 qpair failed and we were unable to recover it. 00:26:55.942 [2024-07-13 06:22:02.206022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.206159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.206189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.942 qpair failed and we were unable to recover it. 00:26:55.942 [2024-07-13 06:22:02.206333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.206464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.206491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.942 qpair failed and we were unable to recover it. 00:26:55.942 [2024-07-13 06:22:02.206633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.206748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.206773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.942 qpair failed and we were unable to recover it. 00:26:55.942 [2024-07-13 06:22:02.206888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.207001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.207026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.942 qpair failed and we were unable to recover it. 00:26:55.942 [2024-07-13 06:22:02.207153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.207293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.207318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.942 qpair failed and we were unable to recover it. 
00:26:55.942 [2024-07-13 06:22:02.207460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.207584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.207609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.942 qpair failed and we were unable to recover it. 00:26:55.942 [2024-07-13 06:22:02.207738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.207876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.207903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.942 qpair failed and we were unable to recover it. 00:26:55.942 [2024-07-13 06:22:02.208025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.208164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.208190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.942 qpair failed and we were unable to recover it. 00:26:55.942 [2024-07-13 06:22:02.208340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.208475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.208501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.942 qpair failed and we were unable to recover it. 00:26:55.942 [2024-07-13 06:22:02.208660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.208804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.208830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.942 qpair failed and we were unable to recover it. 00:26:55.942 [2024-07-13 06:22:02.208986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.209110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.209141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.942 qpair failed and we were unable to recover it. 00:26:55.942 [2024-07-13 06:22:02.209293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.209434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.209459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.942 qpair failed and we were unable to recover it. 
00:26:55.942 [2024-07-13 06:22:02.209574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.209724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.209749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.942 qpair failed and we were unable to recover it. 00:26:55.942 [2024-07-13 06:22:02.209896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.210025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.210050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.942 qpair failed and we were unable to recover it. 00:26:55.942 [2024-07-13 06:22:02.210190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.210307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.210332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.942 qpair failed and we were unable to recover it. 00:26:55.942 [2024-07-13 06:22:02.210478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.210601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.210628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.942 qpair failed and we were unable to recover it. 00:26:55.942 [2024-07-13 06:22:02.210738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.210880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.210907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.942 qpair failed and we were unable to recover it. 00:26:55.942 [2024-07-13 06:22:02.211035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.211149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.211176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.942 qpair failed and we were unable to recover it. 00:26:55.942 [2024-07-13 06:22:02.211347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.211496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.211524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.942 qpair failed and we were unable to recover it. 
00:26:55.942 [2024-07-13 06:22:02.211642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.211816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.211841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.942 qpair failed and we were unable to recover it. 00:26:55.942 [2024-07-13 06:22:02.211990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.212140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.212170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.942 qpair failed and we were unable to recover it. 00:26:55.942 [2024-07-13 06:22:02.212310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.212428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.212454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.942 qpair failed and we were unable to recover it. 00:26:55.942 [2024-07-13 06:22:02.212609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.212726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.212754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.942 qpair failed and we were unable to recover it. 00:26:55.942 06:22:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:55.942 [2024-07-13 06:22:02.212912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 06:22:02 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:55.942 06:22:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:55.942 [2024-07-13 06:22:02.213056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.942 [2024-07-13 06:22:02.213081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.942 qpair failed and we were unable to recover it. 00:26:55.942 06:22:02 -- common/autotest_common.sh@10 -- # set +x 00:26:55.943 [2024-07-13 06:22:02.213208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.943 [2024-07-13 06:22:02.213353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.943 [2024-07-13 06:22:02.213378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.943 qpair failed and we were unable to recover it. 
00:26:55.943 [2024-07-13 06:22:02.213511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.943 [2024-07-13 06:22:02.213635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.943 [2024-07-13 06:22:02.213660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.943 qpair failed and we were unable to recover it. 00:26:55.943 [2024-07-13 06:22:02.213814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.943 [2024-07-13 06:22:02.213940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.943 [2024-07-13 06:22:02.213968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.943 qpair failed and we were unable to recover it. 00:26:55.943 [2024-07-13 06:22:02.214120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.943 [2024-07-13 06:22:02.214288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.943 [2024-07-13 06:22:02.214314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.943 qpair failed and we were unable to recover it. 00:26:55.943 [2024-07-13 06:22:02.214461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.943 [2024-07-13 06:22:02.214630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.943 [2024-07-13 06:22:02.214654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.943 qpair failed and we were unable to recover it. 00:26:55.943 [2024-07-13 06:22:02.214774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.943 [2024-07-13 06:22:02.214926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.943 [2024-07-13 06:22:02.214953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.943 qpair failed and we were unable to recover it. 00:26:55.943 [2024-07-13 06:22:02.215105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.943 [2024-07-13 06:22:02.215225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.943 [2024-07-13 06:22:02.215252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.943 qpair failed and we were unable to recover it. 00:26:55.943 [2024-07-13 06:22:02.215400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.943 [2024-07-13 06:22:02.215539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.943 [2024-07-13 06:22:02.215565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.943 qpair failed and we were unable to recover it. 
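Every connect() failure in the stretch above carries errno = 111, which on Linux is ECONNREFUSED: the initiator keeps retrying 10.0.0.2:4420 while nothing is listening there yet (the TCP listener only comes up a few entries further down, at the "NVMe/TCP Target Listening" notice). A minimal way to confirm the errno mapping on the build host, assuming python3 is available there; this is an illustration, not part of the test harness:
# Translate errno 111 into its symbolic name and message (illustrative check only).
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# prints: ECONNREFUSED - Connection refused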
00:26:55.943 [2024-07-13 06:22:02.215735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.943 [2024-07-13 06:22:02.215883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.943 [2024-07-13 06:22:02.215910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.943 qpair failed and we were unable to recover it. 00:26:55.943 [2024-07-13 06:22:02.216037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.943 [2024-07-13 06:22:02.216195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.943 [2024-07-13 06:22:02.216222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.943 qpair failed and we were unable to recover it. 00:26:55.943 [2024-07-13 06:22:02.216341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.943 [2024-07-13 06:22:02.216514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:55.943 [2024-07-13 06:22:02.216539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8d8c000b90 with addr=10.0.0.2, port=4420 00:26:55.943 qpair failed and we were unable to recover it. 00:26:55.943 [2024-07-13 06:22:02.216823] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:55.943 [2024-07-13 06:22:02.219193] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.943 [2024-07-13 06:22:02.219353] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.943 [2024-07-13 06:22:02.219382] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.943 [2024-07-13 06:22:02.219399] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.943 [2024-07-13 06:22:02.219414] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:55.943 [2024-07-13 06:22:02.219449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.943 qpair failed and we were unable to recover it. 
00:26:55.943 06:22:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:55.943 06:22:02 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:55.943 06:22:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:55.943 06:22:02 -- common/autotest_common.sh@10 -- # set +x 00:26:55.943 06:22:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:55.943 06:22:02 -- host/target_disconnect.sh@58 -- # wait 1236141 00:26:55.943 [2024-07-13 06:22:02.229083] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.943 [2024-07-13 06:22:02.229203] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.943 [2024-07-13 06:22:02.229231] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.943 [2024-07-13 06:22:02.229247] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.943 [2024-07-13 06:22:02.229268] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:55.943 [2024-07-13 06:22:02.229313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.943 qpair failed and we were unable to recover it. 00:26:55.943 [2024-07-13 06:22:02.239135] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.943 [2024-07-13 06:22:02.239269] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.943 [2024-07-13 06:22:02.239297] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.943 [2024-07-13 06:22:02.239313] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.943 [2024-07-13 06:22:02.239327] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:55.943 [2024-07-13 06:22:02.239372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.943 qpair failed and we were unable to recover it. 00:26:55.943 [2024-07-13 06:22:02.249067] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.943 [2024-07-13 06:22:02.249192] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.943 [2024-07-13 06:22:02.249219] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.943 [2024-07-13 06:22:02.249235] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.943 [2024-07-13 06:22:02.249249] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:55.943 [2024-07-13 06:22:02.249281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.943 qpair failed and we were unable to recover it. 
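The rpc_cmd calls traced in the xtrace output above show the target side being configured by host/target_disconnect.sh: the Malloc0 bdev is added as a namespace of nqn.2016-06.io.spdk:cnode1, then TCP listeners for that subsystem and for discovery are opened on 10.0.0.2:4420, after which the script waits on the initiator process. Outside the harness, the same three calls could be issued directly with SPDK's scripts/rpc.py; the sketch below is only an equivalent of what the trace already shows, and it assumes a target app already running with the TCP transport, the Malloc0 bdev, and the cnode1 subsystem created earlier in the run, using the default RPC socket:
# Standalone equivalent of the rpc_cmd calls traced above (paths and prior setup assumed).
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420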
00:26:55.943 [2024-07-13 06:22:02.259181] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.943 [2024-07-13 06:22:02.259306] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.943 [2024-07-13 06:22:02.259334] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.943 [2024-07-13 06:22:02.259350] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.943 [2024-07-13 06:22:02.259364] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:55.943 [2024-07-13 06:22:02.259405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.943 qpair failed and we were unable to recover it. 00:26:55.943 [2024-07-13 06:22:02.269105] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.943 [2024-07-13 06:22:02.269232] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.943 [2024-07-13 06:22:02.269259] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.943 [2024-07-13 06:22:02.269274] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.943 [2024-07-13 06:22:02.269287] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:55.943 [2024-07-13 06:22:02.269318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.943 qpair failed and we were unable to recover it. 00:26:55.943 [2024-07-13 06:22:02.279128] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.943 [2024-07-13 06:22:02.279260] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.943 [2024-07-13 06:22:02.279287] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.943 [2024-07-13 06:22:02.279304] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.943 [2024-07-13 06:22:02.279318] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:55.943 [2024-07-13 06:22:02.279348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.943 qpair failed and we were unable to recover it. 
00:26:55.943 [2024-07-13 06:22:02.289140] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.943 [2024-07-13 06:22:02.289273] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.943 [2024-07-13 06:22:02.289300] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.943 [2024-07-13 06:22:02.289315] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.943 [2024-07-13 06:22:02.289329] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:55.943 [2024-07-13 06:22:02.289361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.943 qpair failed and we were unable to recover it. 00:26:55.943 [2024-07-13 06:22:02.299198] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.943 [2024-07-13 06:22:02.299338] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.943 [2024-07-13 06:22:02.299364] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.943 [2024-07-13 06:22:02.299379] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.944 [2024-07-13 06:22:02.299393] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:55.944 [2024-07-13 06:22:02.299423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.944 qpair failed and we were unable to recover it. 00:26:55.944 [2024-07-13 06:22:02.309248] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.944 [2024-07-13 06:22:02.309377] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.944 [2024-07-13 06:22:02.309404] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.944 [2024-07-13 06:22:02.309419] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.944 [2024-07-13 06:22:02.309432] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:55.944 [2024-07-13 06:22:02.309463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.944 qpair failed and we were unable to recover it. 
00:26:55.944 [2024-07-13 06:22:02.319252] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.944 [2024-07-13 06:22:02.319421] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.944 [2024-07-13 06:22:02.319448] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.944 [2024-07-13 06:22:02.319471] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.944 [2024-07-13 06:22:02.319486] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:55.944 [2024-07-13 06:22:02.319516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.944 qpair failed and we were unable to recover it. 00:26:55.944 [2024-07-13 06:22:02.329250] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.944 [2024-07-13 06:22:02.329373] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.944 [2024-07-13 06:22:02.329397] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.944 [2024-07-13 06:22:02.329412] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.944 [2024-07-13 06:22:02.329427] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:55.944 [2024-07-13 06:22:02.329457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.944 qpair failed and we were unable to recover it. 00:26:55.944 [2024-07-13 06:22:02.339332] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.944 [2024-07-13 06:22:02.339457] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.944 [2024-07-13 06:22:02.339482] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.944 [2024-07-13 06:22:02.339498] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.944 [2024-07-13 06:22:02.339512] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:55.944 [2024-07-13 06:22:02.339543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.944 qpair failed and we were unable to recover it. 
00:26:55.944 [2024-07-13 06:22:02.349352] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.944 [2024-07-13 06:22:02.349490] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.944 [2024-07-13 06:22:02.349516] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.944 [2024-07-13 06:22:02.349531] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.944 [2024-07-13 06:22:02.349555] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:55.944 [2024-07-13 06:22:02.349584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.944 qpair failed and we were unable to recover it. 00:26:55.944 [2024-07-13 06:22:02.359394] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.944 [2024-07-13 06:22:02.359514] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.944 [2024-07-13 06:22:02.359542] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.944 [2024-07-13 06:22:02.359558] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.944 [2024-07-13 06:22:02.359572] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:55.944 [2024-07-13 06:22:02.359618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.944 qpair failed and we were unable to recover it. 00:26:55.944 [2024-07-13 06:22:02.369438] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.944 [2024-07-13 06:22:02.369596] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.944 [2024-07-13 06:22:02.369624] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.944 [2024-07-13 06:22:02.369640] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.944 [2024-07-13 06:22:02.369654] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:55.944 [2024-07-13 06:22:02.369684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.944 qpair failed and we were unable to recover it. 
00:26:55.944 [2024-07-13 06:22:02.379414] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.944 [2024-07-13 06:22:02.379534] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.944 [2024-07-13 06:22:02.379559] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.944 [2024-07-13 06:22:02.379575] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.944 [2024-07-13 06:22:02.379588] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:55.944 [2024-07-13 06:22:02.379619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.944 qpair failed and we were unable to recover it. 00:26:55.944 [2024-07-13 06:22:02.389469] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.944 [2024-07-13 06:22:02.389597] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.944 [2024-07-13 06:22:02.389624] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.944 [2024-07-13 06:22:02.389643] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.944 [2024-07-13 06:22:02.389657] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:55.944 [2024-07-13 06:22:02.389688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.944 qpair failed and we were unable to recover it. 00:26:55.944 [2024-07-13 06:22:02.399499] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.944 [2024-07-13 06:22:02.399634] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.944 [2024-07-13 06:22:02.399661] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.944 [2024-07-13 06:22:02.399677] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.944 [2024-07-13 06:22:02.399691] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:55.944 [2024-07-13 06:22:02.399737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.944 qpair failed and we were unable to recover it. 
00:26:55.944 [2024-07-13 06:22:02.409511] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.944 [2024-07-13 06:22:02.409656] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.944 [2024-07-13 06:22:02.409684] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.944 [2024-07-13 06:22:02.409705] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.944 [2024-07-13 06:22:02.409720] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:55.944 [2024-07-13 06:22:02.409750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.944 qpair failed and we were unable to recover it. 00:26:55.944 [2024-07-13 06:22:02.419547] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.944 [2024-07-13 06:22:02.419681] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.944 [2024-07-13 06:22:02.419707] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.944 [2024-07-13 06:22:02.419723] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.944 [2024-07-13 06:22:02.419736] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:55.944 [2024-07-13 06:22:02.419767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.944 qpair failed and we were unable to recover it. 00:26:55.944 [2024-07-13 06:22:02.429590] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:55.944 [2024-07-13 06:22:02.429710] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:55.944 [2024-07-13 06:22:02.429736] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:55.944 [2024-07-13 06:22:02.429752] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:55.944 [2024-07-13 06:22:02.429766] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:55.944 [2024-07-13 06:22:02.429798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:55.944 qpair failed and we were unable to recover it. 
00:26:56.203 [2024-07-13 06:22:02.439596] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.203 [2024-07-13 06:22:02.439712] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.203 [2024-07-13 06:22:02.439739] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.203 [2024-07-13 06:22:02.439754] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.203 [2024-07-13 06:22:02.439769] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.203 [2024-07-13 06:22:02.439799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.203 qpair failed and we were unable to recover it. 00:26:56.203 [2024-07-13 06:22:02.449655] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.203 [2024-07-13 06:22:02.449789] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.203 [2024-07-13 06:22:02.449815] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.203 [2024-07-13 06:22:02.449831] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.203 [2024-07-13 06:22:02.449857] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.203 [2024-07-13 06:22:02.449896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.203 qpair failed and we were unable to recover it. 00:26:56.203 [2024-07-13 06:22:02.459694] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.203 [2024-07-13 06:22:02.459861] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.203 [2024-07-13 06:22:02.459902] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.203 [2024-07-13 06:22:02.459920] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.203 [2024-07-13 06:22:02.459933] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.203 [2024-07-13 06:22:02.459964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.203 qpair failed and we were unable to recover it. 
00:26:56.203 [2024-07-13 06:22:02.469683] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.203 [2024-07-13 06:22:02.469820] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.203 [2024-07-13 06:22:02.469847] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.203 [2024-07-13 06:22:02.469862] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.203 [2024-07-13 06:22:02.469883] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.204 [2024-07-13 06:22:02.469914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.204 qpair failed and we were unable to recover it. 00:26:56.204 [2024-07-13 06:22:02.479744] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.204 [2024-07-13 06:22:02.479886] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.204 [2024-07-13 06:22:02.479916] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.204 [2024-07-13 06:22:02.479934] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.204 [2024-07-13 06:22:02.479948] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.204 [2024-07-13 06:22:02.479981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.204 qpair failed and we were unable to recover it. 00:26:56.204 [2024-07-13 06:22:02.489734] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.204 [2024-07-13 06:22:02.489877] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.204 [2024-07-13 06:22:02.489904] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.204 [2024-07-13 06:22:02.489920] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.204 [2024-07-13 06:22:02.489935] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.204 [2024-07-13 06:22:02.489966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.204 qpair failed and we were unable to recover it. 
00:26:56.204 [2024-07-13 06:22:02.499793] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.204 [2024-07-13 06:22:02.499931] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.204 [2024-07-13 06:22:02.499963] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.204 [2024-07-13 06:22:02.499980] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.204 [2024-07-13 06:22:02.499995] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.204 [2024-07-13 06:22:02.500026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.204 qpair failed and we were unable to recover it. 00:26:56.204 [2024-07-13 06:22:02.509816] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.204 [2024-07-13 06:22:02.509961] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.204 [2024-07-13 06:22:02.509988] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.204 [2024-07-13 06:22:02.510003] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.204 [2024-07-13 06:22:02.510018] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.204 [2024-07-13 06:22:02.510048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.204 qpair failed and we were unable to recover it. 00:26:56.204 [2024-07-13 06:22:02.519859] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.204 [2024-07-13 06:22:02.519987] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.204 [2024-07-13 06:22:02.520014] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.204 [2024-07-13 06:22:02.520030] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.204 [2024-07-13 06:22:02.520045] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.204 [2024-07-13 06:22:02.520075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.204 qpair failed and we were unable to recover it. 
00:26:56.204 [2024-07-13 06:22:02.529906] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.204 [2024-07-13 06:22:02.530047] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.204 [2024-07-13 06:22:02.530073] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.204 [2024-07-13 06:22:02.530088] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.204 [2024-07-13 06:22:02.530102] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.204 [2024-07-13 06:22:02.530134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.204 qpair failed and we were unable to recover it. 00:26:56.204 [2024-07-13 06:22:02.539918] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.204 [2024-07-13 06:22:02.540048] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.204 [2024-07-13 06:22:02.540074] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.204 [2024-07-13 06:22:02.540090] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.204 [2024-07-13 06:22:02.540104] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.204 [2024-07-13 06:22:02.540140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.204 qpair failed and we were unable to recover it. 00:26:56.204 [2024-07-13 06:22:02.549934] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.204 [2024-07-13 06:22:02.550057] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.204 [2024-07-13 06:22:02.550082] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.204 [2024-07-13 06:22:02.550097] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.204 [2024-07-13 06:22:02.550112] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.204 [2024-07-13 06:22:02.550153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.204 qpair failed and we were unable to recover it. 
00:26:56.204 [2024-07-13 06:22:02.560001] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.204 [2024-07-13 06:22:02.560131] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.204 [2024-07-13 06:22:02.560161] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.204 [2024-07-13 06:22:02.560177] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.204 [2024-07-13 06:22:02.560191] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.204 [2024-07-13 06:22:02.560221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.204 qpair failed and we were unable to recover it. 00:26:56.204 [2024-07-13 06:22:02.570019] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.204 [2024-07-13 06:22:02.570162] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.204 [2024-07-13 06:22:02.570188] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.204 [2024-07-13 06:22:02.570203] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.204 [2024-07-13 06:22:02.570218] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.204 [2024-07-13 06:22:02.570259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.204 qpair failed and we were unable to recover it. 00:26:56.204 [2024-07-13 06:22:02.580011] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.204 [2024-07-13 06:22:02.580134] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.204 [2024-07-13 06:22:02.580163] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.204 [2024-07-13 06:22:02.580178] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.204 [2024-07-13 06:22:02.580193] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.204 [2024-07-13 06:22:02.580238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.204 qpair failed and we were unable to recover it. 
00:26:56.204 [2024-07-13 06:22:02.590121] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.204 [2024-07-13 06:22:02.590298] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.204 [2024-07-13 06:22:02.590330] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.204 [2024-07-13 06:22:02.590346] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.204 [2024-07-13 06:22:02.590360] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.204 [2024-07-13 06:22:02.590390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.204 qpair failed and we were unable to recover it. 00:26:56.204 [2024-07-13 06:22:02.600056] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.204 [2024-07-13 06:22:02.600210] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.204 [2024-07-13 06:22:02.600236] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.204 [2024-07-13 06:22:02.600251] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.204 [2024-07-13 06:22:02.600265] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.204 [2024-07-13 06:22:02.600295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.204 qpair failed and we were unable to recover it. 00:26:56.204 [2024-07-13 06:22:02.610140] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.205 [2024-07-13 06:22:02.610295] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.205 [2024-07-13 06:22:02.610321] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.205 [2024-07-13 06:22:02.610336] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.205 [2024-07-13 06:22:02.610350] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.205 [2024-07-13 06:22:02.610380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.205 qpair failed and we were unable to recover it. 
00:26:56.205 [2024-07-13 06:22:02.620173] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.205 [2024-07-13 06:22:02.620307] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.205 [2024-07-13 06:22:02.620334] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.205 [2024-07-13 06:22:02.620355] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.205 [2024-07-13 06:22:02.620369] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.205 [2024-07-13 06:22:02.620415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.205 qpair failed and we were unable to recover it. 00:26:56.205 [2024-07-13 06:22:02.630209] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.205 [2024-07-13 06:22:02.630343] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.205 [2024-07-13 06:22:02.630370] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.205 [2024-07-13 06:22:02.630385] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.205 [2024-07-13 06:22:02.630400] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.205 [2024-07-13 06:22:02.630454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.205 qpair failed and we were unable to recover it. 00:26:56.205 [2024-07-13 06:22:02.640227] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.205 [2024-07-13 06:22:02.640359] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.205 [2024-07-13 06:22:02.640385] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.205 [2024-07-13 06:22:02.640401] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.205 [2024-07-13 06:22:02.640415] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.205 [2024-07-13 06:22:02.640445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.205 qpair failed and we were unable to recover it. 
00:26:56.205 [2024-07-13 06:22:02.650214] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.205 [2024-07-13 06:22:02.650345] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.205 [2024-07-13 06:22:02.650371] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.205 [2024-07-13 06:22:02.650387] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.205 [2024-07-13 06:22:02.650401] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.205 [2024-07-13 06:22:02.650431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.205 qpair failed and we were unable to recover it. 00:26:56.205 [2024-07-13 06:22:02.660265] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.205 [2024-07-13 06:22:02.660424] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.205 [2024-07-13 06:22:02.660451] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.205 [2024-07-13 06:22:02.660466] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.205 [2024-07-13 06:22:02.660480] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.205 [2024-07-13 06:22:02.660511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.205 qpair failed and we were unable to recover it. 00:26:56.205 [2024-07-13 06:22:02.670405] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.205 [2024-07-13 06:22:02.670541] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.205 [2024-07-13 06:22:02.670569] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.205 [2024-07-13 06:22:02.670590] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.205 [2024-07-13 06:22:02.670605] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.205 [2024-07-13 06:22:02.670652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.205 qpair failed and we were unable to recover it. 
00:26:56.205 [2024-07-13 06:22:02.680342] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.205 [2024-07-13 06:22:02.680471] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.205 [2024-07-13 06:22:02.680503] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.205 [2024-07-13 06:22:02.680519] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.205 [2024-07-13 06:22:02.680534] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.205 [2024-07-13 06:22:02.680564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.205 qpair failed and we were unable to recover it. 00:26:56.205 [2024-07-13 06:22:02.690386] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.205 [2024-07-13 06:22:02.690541] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.205 [2024-07-13 06:22:02.690568] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.205 [2024-07-13 06:22:02.690584] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.205 [2024-07-13 06:22:02.690598] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.205 [2024-07-13 06:22:02.690644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.205 qpair failed and we were unable to recover it. 00:26:56.205 [2024-07-13 06:22:02.700369] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.205 [2024-07-13 06:22:02.700496] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.205 [2024-07-13 06:22:02.700522] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.205 [2024-07-13 06:22:02.700538] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.205 [2024-07-13 06:22:02.700552] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.205 [2024-07-13 06:22:02.700583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.205 qpair failed and we were unable to recover it. 
00:26:56.205 [2024-07-13 06:22:02.710420] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.205 [2024-07-13 06:22:02.710567] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.205 [2024-07-13 06:22:02.710596] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.205 [2024-07-13 06:22:02.710613] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.205 [2024-07-13 06:22:02.710641] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.205 [2024-07-13 06:22:02.710684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.205 qpair failed and we were unable to recover it. 00:26:56.464 [2024-07-13 06:22:02.720459] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.464 [2024-07-13 06:22:02.720589] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.464 [2024-07-13 06:22:02.720615] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.464 [2024-07-13 06:22:02.720631] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.464 [2024-07-13 06:22:02.720651] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.464 [2024-07-13 06:22:02.720682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.464 qpair failed and we were unable to recover it. 00:26:56.464 [2024-07-13 06:22:02.730562] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.464 [2024-07-13 06:22:02.730717] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.464 [2024-07-13 06:22:02.730743] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.464 [2024-07-13 06:22:02.730759] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.464 [2024-07-13 06:22:02.730773] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.464 [2024-07-13 06:22:02.730818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.464 qpair failed and we were unable to recover it. 
00:26:56.464 [2024-07-13 06:22:02.740490] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.464 [2024-07-13 06:22:02.740662] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.464 [2024-07-13 06:22:02.740688] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.464 [2024-07-13 06:22:02.740704] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.464 [2024-07-13 06:22:02.740718] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.464 [2024-07-13 06:22:02.740748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.464 qpair failed and we were unable to recover it. 00:26:56.464 [2024-07-13 06:22:02.750540] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.464 [2024-07-13 06:22:02.750670] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.464 [2024-07-13 06:22:02.750696] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.464 [2024-07-13 06:22:02.750712] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.464 [2024-07-13 06:22:02.750726] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.464 [2024-07-13 06:22:02.750757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.464 qpair failed and we were unable to recover it. 00:26:56.464 [2024-07-13 06:22:02.760635] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.464 [2024-07-13 06:22:02.760761] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.464 [2024-07-13 06:22:02.760787] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.464 [2024-07-13 06:22:02.760802] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.464 [2024-07-13 06:22:02.760817] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.464 [2024-07-13 06:22:02.760847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.464 qpair failed and we were unable to recover it. 
00:26:56.464 [2024-07-13 06:22:02.770585] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.464 [2024-07-13 06:22:02.770728] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.464 [2024-07-13 06:22:02.770753] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.464 [2024-07-13 06:22:02.770769] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.464 [2024-07-13 06:22:02.770784] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.464 [2024-07-13 06:22:02.770814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.464 qpair failed and we were unable to recover it. 00:26:56.464 [2024-07-13 06:22:02.780628] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.464 [2024-07-13 06:22:02.780752] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.464 [2024-07-13 06:22:02.780778] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.464 [2024-07-13 06:22:02.780793] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.464 [2024-07-13 06:22:02.780808] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.464 [2024-07-13 06:22:02.780838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.464 qpair failed and we were unable to recover it. 00:26:56.464 [2024-07-13 06:22:02.790646] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.464 [2024-07-13 06:22:02.790773] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.464 [2024-07-13 06:22:02.790800] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.464 [2024-07-13 06:22:02.790815] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.464 [2024-07-13 06:22:02.790833] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.464 [2024-07-13 06:22:02.790863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.464 qpair failed and we were unable to recover it. 
00:26:56.464 [2024-07-13 06:22:02.800752] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.464 [2024-07-13 06:22:02.800901] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.464 [2024-07-13 06:22:02.800930] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.464 [2024-07-13 06:22:02.800946] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.464 [2024-07-13 06:22:02.800960] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.464 [2024-07-13 06:22:02.800990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.464 qpair failed and we were unable to recover it. 00:26:56.464 [2024-07-13 06:22:02.810739] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.464 [2024-07-13 06:22:02.810916] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.464 [2024-07-13 06:22:02.810942] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.464 [2024-07-13 06:22:02.810957] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.464 [2024-07-13 06:22:02.810977] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.464 [2024-07-13 06:22:02.811009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.464 qpair failed and we were unable to recover it. 00:26:56.464 [2024-07-13 06:22:02.820780] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.465 [2024-07-13 06:22:02.820936] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.465 [2024-07-13 06:22:02.820962] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.465 [2024-07-13 06:22:02.820978] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.465 [2024-07-13 06:22:02.820992] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.465 [2024-07-13 06:22:02.821023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.465 qpair failed and we were unable to recover it. 
00:26:56.465 [2024-07-13 06:22:02.830746] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.465 [2024-07-13 06:22:02.830895] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.465 [2024-07-13 06:22:02.830921] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.465 [2024-07-13 06:22:02.830937] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.465 [2024-07-13 06:22:02.830951] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.465 [2024-07-13 06:22:02.830983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.465 qpair failed and we were unable to recover it. 00:26:56.465 [2024-07-13 06:22:02.840796] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.465 [2024-07-13 06:22:02.840920] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.465 [2024-07-13 06:22:02.840946] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.465 [2024-07-13 06:22:02.840961] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.465 [2024-07-13 06:22:02.840975] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.465 [2024-07-13 06:22:02.841006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.465 qpair failed and we were unable to recover it. 00:26:56.465 [2024-07-13 06:22:02.850848] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.465 [2024-07-13 06:22:02.851023] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.465 [2024-07-13 06:22:02.851049] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.465 [2024-07-13 06:22:02.851065] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.465 [2024-07-13 06:22:02.851079] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.465 [2024-07-13 06:22:02.851110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.465 qpair failed and we were unable to recover it. 
00:26:56.465 [2024-07-13 06:22:02.860850] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.465 [2024-07-13 06:22:02.860986] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.465 [2024-07-13 06:22:02.861013] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.465 [2024-07-13 06:22:02.861028] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.465 [2024-07-13 06:22:02.861042] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.465 [2024-07-13 06:22:02.861073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.465 qpair failed and we were unable to recover it. 00:26:56.465 [2024-07-13 06:22:02.870850] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.465 [2024-07-13 06:22:02.870982] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.465 [2024-07-13 06:22:02.871008] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.465 [2024-07-13 06:22:02.871024] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.465 [2024-07-13 06:22:02.871038] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.465 [2024-07-13 06:22:02.871069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.465 qpair failed and we were unable to recover it. 00:26:56.465 [2024-07-13 06:22:02.880899] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.465 [2024-07-13 06:22:02.881026] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.465 [2024-07-13 06:22:02.881052] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.465 [2024-07-13 06:22:02.881068] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.465 [2024-07-13 06:22:02.881082] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.465 [2024-07-13 06:22:02.881112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.465 qpair failed and we were unable to recover it. 
00:26:56.465 [2024-07-13 06:22:02.890936] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.465 [2024-07-13 06:22:02.891068] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.465 [2024-07-13 06:22:02.891094] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.465 [2024-07-13 06:22:02.891109] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.465 [2024-07-13 06:22:02.891124] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.465 [2024-07-13 06:22:02.891155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.465 qpair failed and we were unable to recover it. 00:26:56.465 [2024-07-13 06:22:02.900949] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.465 [2024-07-13 06:22:02.901121] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.465 [2024-07-13 06:22:02.901147] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.465 [2024-07-13 06:22:02.901168] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.465 [2024-07-13 06:22:02.901183] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.465 [2024-07-13 06:22:02.901214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.465 qpair failed and we were unable to recover it. 00:26:56.465 [2024-07-13 06:22:02.910990] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.465 [2024-07-13 06:22:02.911112] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.465 [2024-07-13 06:22:02.911138] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.465 [2024-07-13 06:22:02.911154] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.465 [2024-07-13 06:22:02.911168] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.465 [2024-07-13 06:22:02.911198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.465 qpair failed and we were unable to recover it. 
00:26:56.465 [2024-07-13 06:22:02.920990] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.465 [2024-07-13 06:22:02.921112] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.465 [2024-07-13 06:22:02.921137] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.465 [2024-07-13 06:22:02.921152] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.465 [2024-07-13 06:22:02.921167] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.465 [2024-07-13 06:22:02.921197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.465 qpair failed and we were unable to recover it. 00:26:56.465 [2024-07-13 06:22:02.931094] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.465 [2024-07-13 06:22:02.931237] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.465 [2024-07-13 06:22:02.931263] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.465 [2024-07-13 06:22:02.931278] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.465 [2024-07-13 06:22:02.931293] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.465 [2024-07-13 06:22:02.931323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.465 qpair failed and we were unable to recover it. 00:26:56.465 [2024-07-13 06:22:02.941079] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.465 [2024-07-13 06:22:02.941250] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.465 [2024-07-13 06:22:02.941276] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.465 [2024-07-13 06:22:02.941292] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.465 [2024-07-13 06:22:02.941307] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.465 [2024-07-13 06:22:02.941337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.465 qpair failed and we were unable to recover it. 
00:26:56.465 [2024-07-13 06:22:02.951088] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.465 [2024-07-13 06:22:02.951225] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.466 [2024-07-13 06:22:02.951250] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.466 [2024-07-13 06:22:02.951266] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.466 [2024-07-13 06:22:02.951281] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.466 [2024-07-13 06:22:02.951311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.466 qpair failed and we were unable to recover it. 00:26:56.466 [2024-07-13 06:22:02.961183] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.466 [2024-07-13 06:22:02.961350] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.466 [2024-07-13 06:22:02.961376] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.466 [2024-07-13 06:22:02.961392] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.466 [2024-07-13 06:22:02.961406] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.466 [2024-07-13 06:22:02.961436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.466 qpair failed and we were unable to recover it. 00:26:56.466 [2024-07-13 06:22:02.971171] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.466 [2024-07-13 06:22:02.971296] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.466 [2024-07-13 06:22:02.971322] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.466 [2024-07-13 06:22:02.971338] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.466 [2024-07-13 06:22:02.971352] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.466 [2024-07-13 06:22:02.971396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.466 qpair failed and we were unable to recover it. 
00:26:56.726 [2024-07-13 06:22:02.981280] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.726 [2024-07-13 06:22:02.981411] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.726 [2024-07-13 06:22:02.981437] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.726 [2024-07-13 06:22:02.981453] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.726 [2024-07-13 06:22:02.981470] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.726 [2024-07-13 06:22:02.981501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.726 qpair failed and we were unable to recover it. 00:26:56.726 [2024-07-13 06:22:02.991335] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.726 [2024-07-13 06:22:02.991461] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.726 [2024-07-13 06:22:02.991492] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.726 [2024-07-13 06:22:02.991508] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.726 [2024-07-13 06:22:02.991523] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.726 [2024-07-13 06:22:02.991553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.726 qpair failed and we were unable to recover it. 00:26:56.726 [2024-07-13 06:22:03.001243] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.726 [2024-07-13 06:22:03.001370] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.726 [2024-07-13 06:22:03.001396] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.726 [2024-07-13 06:22:03.001412] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.726 [2024-07-13 06:22:03.001426] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.726 [2024-07-13 06:22:03.001456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.726 qpair failed and we were unable to recover it. 
00:26:56.726 [2024-07-13 06:22:03.011283] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.726 [2024-07-13 06:22:03.011402] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.726 [2024-07-13 06:22:03.011428] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.726 [2024-07-13 06:22:03.011443] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.726 [2024-07-13 06:22:03.011457] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.726 [2024-07-13 06:22:03.011487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.726 qpair failed and we were unable to recover it. 00:26:56.726 [2024-07-13 06:22:03.021338] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.726 [2024-07-13 06:22:03.021464] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.726 [2024-07-13 06:22:03.021491] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.726 [2024-07-13 06:22:03.021506] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.726 [2024-07-13 06:22:03.021521] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.726 [2024-07-13 06:22:03.021563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.726 qpair failed and we were unable to recover it. 00:26:56.726 [2024-07-13 06:22:03.031391] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.726 [2024-07-13 06:22:03.031521] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.726 [2024-07-13 06:22:03.031548] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.726 [2024-07-13 06:22:03.031564] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.726 [2024-07-13 06:22:03.031578] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.726 [2024-07-13 06:22:03.031614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.726 qpair failed and we were unable to recover it. 
00:26:56.726 [2024-07-13 06:22:03.041335] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.726 [2024-07-13 06:22:03.041484] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.726 [2024-07-13 06:22:03.041510] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.726 [2024-07-13 06:22:03.041526] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.726 [2024-07-13 06:22:03.041541] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.726 [2024-07-13 06:22:03.041572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.726 qpair failed and we were unable to recover it. 00:26:56.726 [2024-07-13 06:22:03.051374] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.726 [2024-07-13 06:22:03.051511] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.726 [2024-07-13 06:22:03.051537] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.726 [2024-07-13 06:22:03.051552] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.726 [2024-07-13 06:22:03.051567] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.726 [2024-07-13 06:22:03.051597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.726 qpair failed and we were unable to recover it. 00:26:56.726 [2024-07-13 06:22:03.061404] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.726 [2024-07-13 06:22:03.061526] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.726 [2024-07-13 06:22:03.061553] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.726 [2024-07-13 06:22:03.061569] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.726 [2024-07-13 06:22:03.061583] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.726 [2024-07-13 06:22:03.061613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.726 qpair failed and we were unable to recover it. 
00:26:56.726 [2024-07-13 06:22:03.071459] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.726 [2024-07-13 06:22:03.071576] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.726 [2024-07-13 06:22:03.071602] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.727 [2024-07-13 06:22:03.071617] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.727 [2024-07-13 06:22:03.071632] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.727 [2024-07-13 06:22:03.071662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.727 qpair failed and we were unable to recover it. 00:26:56.727 [2024-07-13 06:22:03.081476] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.727 [2024-07-13 06:22:03.081595] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.727 [2024-07-13 06:22:03.081626] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.727 [2024-07-13 06:22:03.081642] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.727 [2024-07-13 06:22:03.081657] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.727 [2024-07-13 06:22:03.081687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.727 qpair failed and we were unable to recover it. 00:26:56.727 [2024-07-13 06:22:03.091501] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.727 [2024-07-13 06:22:03.091655] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.727 [2024-07-13 06:22:03.091681] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.727 [2024-07-13 06:22:03.091697] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.727 [2024-07-13 06:22:03.091711] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.727 [2024-07-13 06:22:03.091742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.727 qpair failed and we were unable to recover it. 
00:26:56.727 [2024-07-13 06:22:03.101519] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.727 [2024-07-13 06:22:03.101651] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.727 [2024-07-13 06:22:03.101679] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.727 [2024-07-13 06:22:03.101694] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.727 [2024-07-13 06:22:03.101712] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.727 [2024-07-13 06:22:03.101743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.727 qpair failed and we were unable to recover it. 00:26:56.727 [2024-07-13 06:22:03.111553] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.727 [2024-07-13 06:22:03.111699] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.727 [2024-07-13 06:22:03.111727] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.727 [2024-07-13 06:22:03.111743] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.727 [2024-07-13 06:22:03.111762] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.727 [2024-07-13 06:22:03.111794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.727 qpair failed and we were unable to recover it. 00:26:56.727 [2024-07-13 06:22:03.121588] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.727 [2024-07-13 06:22:03.121716] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.727 [2024-07-13 06:22:03.121743] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.727 [2024-07-13 06:22:03.121758] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.727 [2024-07-13 06:22:03.121773] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.727 [2024-07-13 06:22:03.121810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.727 qpair failed and we were unable to recover it. 
00:26:56.727 [2024-07-13 06:22:03.131656] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.727 [2024-07-13 06:22:03.131786] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.727 [2024-07-13 06:22:03.131812] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.727 [2024-07-13 06:22:03.131828] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.727 [2024-07-13 06:22:03.131842] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.727 [2024-07-13 06:22:03.131890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.727 qpair failed and we were unable to recover it. 00:26:56.727 [2024-07-13 06:22:03.141663] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.727 [2024-07-13 06:22:03.141786] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.727 [2024-07-13 06:22:03.141812] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.727 [2024-07-13 06:22:03.141827] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.727 [2024-07-13 06:22:03.141842] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.727 [2024-07-13 06:22:03.141880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.727 qpair failed and we were unable to recover it. 00:26:56.727 [2024-07-13 06:22:03.151664] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.727 [2024-07-13 06:22:03.151795] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.727 [2024-07-13 06:22:03.151822] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.727 [2024-07-13 06:22:03.151841] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.727 [2024-07-13 06:22:03.151857] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.727 [2024-07-13 06:22:03.151894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.727 qpair failed and we were unable to recover it. 
00:26:56.727 [2024-07-13 06:22:03.161719] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.727 [2024-07-13 06:22:03.161885] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.727 [2024-07-13 06:22:03.161911] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.727 [2024-07-13 06:22:03.161927] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.727 [2024-07-13 06:22:03.161942] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.727 [2024-07-13 06:22:03.161973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.727 qpair failed and we were unable to recover it. 00:26:56.727 [2024-07-13 06:22:03.171725] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.727 [2024-07-13 06:22:03.171855] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.727 [2024-07-13 06:22:03.171893] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.727 [2024-07-13 06:22:03.171910] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.727 [2024-07-13 06:22:03.171925] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.727 [2024-07-13 06:22:03.171956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.727 qpair failed and we were unable to recover it. 00:26:56.727 [2024-07-13 06:22:03.181788] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.727 [2024-07-13 06:22:03.181945] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.727 [2024-07-13 06:22:03.181972] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.727 [2024-07-13 06:22:03.181987] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.727 [2024-07-13 06:22:03.182001] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.727 [2024-07-13 06:22:03.182031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.727 qpair failed and we were unable to recover it. 
00:26:56.727 [2024-07-13 06:22:03.191783] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.727 [2024-07-13 06:22:03.191930] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.727 [2024-07-13 06:22:03.191956] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.727 [2024-07-13 06:22:03.191972] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.727 [2024-07-13 06:22:03.191985] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.727 [2024-07-13 06:22:03.192017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.727 qpair failed and we were unable to recover it. 00:26:56.727 [2024-07-13 06:22:03.201817] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.727 [2024-07-13 06:22:03.201942] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.727 [2024-07-13 06:22:03.201969] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.727 [2024-07-13 06:22:03.201985] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.727 [2024-07-13 06:22:03.202000] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.727 [2024-07-13 06:22:03.202030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.727 qpair failed and we were unable to recover it. 00:26:56.727 [2024-07-13 06:22:03.211850] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.727 [2024-07-13 06:22:03.211990] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.727 [2024-07-13 06:22:03.212017] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.727 [2024-07-13 06:22:03.212034] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.727 [2024-07-13 06:22:03.212054] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.727 [2024-07-13 06:22:03.212086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.727 qpair failed and we were unable to recover it. 
00:26:56.728 [2024-07-13 06:22:03.221886] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.728 [2024-07-13 06:22:03.222007] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.728 [2024-07-13 06:22:03.222032] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.728 [2024-07-13 06:22:03.222047] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.728 [2024-07-13 06:22:03.222061] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.728 [2024-07-13 06:22:03.222091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.728 qpair failed and we were unable to recover it. 00:26:56.728 [2024-07-13 06:22:03.231945] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.728 [2024-07-13 06:22:03.232065] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.728 [2024-07-13 06:22:03.232090] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.728 [2024-07-13 06:22:03.232106] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.728 [2024-07-13 06:22:03.232119] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.728 [2024-07-13 06:22:03.232150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.728 qpair failed and we were unable to recover it. 00:26:56.987 [2024-07-13 06:22:03.241938] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.987 [2024-07-13 06:22:03.242068] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.987 [2024-07-13 06:22:03.242095] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.987 [2024-07-13 06:22:03.242111] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.988 [2024-07-13 06:22:03.242125] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.988 [2024-07-13 06:22:03.242170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.988 qpair failed and we were unable to recover it. 
00:26:56.988 [2024-07-13 06:22:03.252017] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.988 [2024-07-13 06:22:03.252149] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.988 [2024-07-13 06:22:03.252176] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.988 [2024-07-13 06:22:03.252193] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.988 [2024-07-13 06:22:03.252207] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.988 [2024-07-13 06:22:03.252249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-13 06:22:03.262099] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.988 [2024-07-13 06:22:03.262244] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.988 [2024-07-13 06:22:03.262270] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.988 [2024-07-13 06:22:03.262286] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.988 [2024-07-13 06:22:03.262300] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.988 [2024-07-13 06:22:03.262345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-13 06:22:03.272037] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.988 [2024-07-13 06:22:03.272160] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.988 [2024-07-13 06:22:03.272185] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.988 [2024-07-13 06:22:03.272201] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.988 [2024-07-13 06:22:03.272215] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.988 [2024-07-13 06:22:03.272246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.988 qpair failed and we were unable to recover it. 
00:26:56.988 [2024-07-13 06:22:03.282053] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.988 [2024-07-13 06:22:03.282185] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.988 [2024-07-13 06:22:03.282212] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.988 [2024-07-13 06:22:03.282228] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.988 [2024-07-13 06:22:03.282242] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.988 [2024-07-13 06:22:03.282274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-13 06:22:03.292120] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.988 [2024-07-13 06:22:03.292247] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.988 [2024-07-13 06:22:03.292273] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.988 [2024-07-13 06:22:03.292289] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.988 [2024-07-13 06:22:03.292303] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.988 [2024-07-13 06:22:03.292335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-13 06:22:03.302129] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.988 [2024-07-13 06:22:03.302253] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.988 [2024-07-13 06:22:03.302290] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.988 [2024-07-13 06:22:03.302306] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.988 [2024-07-13 06:22:03.302326] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.988 [2024-07-13 06:22:03.302357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.988 qpair failed and we were unable to recover it. 
00:26:56.988 [2024-07-13 06:22:03.312159] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.988 [2024-07-13 06:22:03.312284] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.988 [2024-07-13 06:22:03.312320] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.988 [2024-07-13 06:22:03.312336] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.988 [2024-07-13 06:22:03.312350] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.988 [2024-07-13 06:22:03.312392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-13 06:22:03.322162] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.988 [2024-07-13 06:22:03.322298] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.988 [2024-07-13 06:22:03.322325] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.988 [2024-07-13 06:22:03.322341] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.988 [2024-07-13 06:22:03.322355] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.988 [2024-07-13 06:22:03.322387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-13 06:22:03.332214] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.988 [2024-07-13 06:22:03.332337] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.988 [2024-07-13 06:22:03.332366] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.988 [2024-07-13 06:22:03.332382] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.988 [2024-07-13 06:22:03.332395] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.988 [2024-07-13 06:22:03.332426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.988 qpair failed and we were unable to recover it. 
00:26:56.988 [2024-07-13 06:22:03.342245] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.988 [2024-07-13 06:22:03.342378] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.988 [2024-07-13 06:22:03.342403] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.988 [2024-07-13 06:22:03.342419] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.988 [2024-07-13 06:22:03.342432] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.988 [2024-07-13 06:22:03.342462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-13 06:22:03.352250] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.988 [2024-07-13 06:22:03.352369] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.988 [2024-07-13 06:22:03.352394] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.988 [2024-07-13 06:22:03.352410] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.988 [2024-07-13 06:22:03.352424] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.988 [2024-07-13 06:22:03.352454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.988 qpair failed and we were unable to recover it. 00:26:56.988 [2024-07-13 06:22:03.362353] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.988 [2024-07-13 06:22:03.362474] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.988 [2024-07-13 06:22:03.362500] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.988 [2024-07-13 06:22:03.362516] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.988 [2024-07-13 06:22:03.362530] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.988 [2024-07-13 06:22:03.362562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.988 qpair failed and we were unable to recover it. 
00:26:56.989 [2024-07-13 06:22:03.372328] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.989 [2024-07-13 06:22:03.372459] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.989 [2024-07-13 06:22:03.372486] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.989 [2024-07-13 06:22:03.372501] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.989 [2024-07-13 06:22:03.372515] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.989 [2024-07-13 06:22:03.372545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.989 qpair failed and we were unable to recover it. 00:26:56.989 [2024-07-13 06:22:03.382372] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.989 [2024-07-13 06:22:03.382499] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.989 [2024-07-13 06:22:03.382526] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.989 [2024-07-13 06:22:03.382542] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.989 [2024-07-13 06:22:03.382556] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.989 [2024-07-13 06:22:03.382587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.989 qpair failed and we were unable to recover it. 00:26:56.989 [2024-07-13 06:22:03.392367] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.989 [2024-07-13 06:22:03.392485] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.989 [2024-07-13 06:22:03.392521] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.989 [2024-07-13 06:22:03.392545] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.989 [2024-07-13 06:22:03.392560] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.989 [2024-07-13 06:22:03.392592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.989 qpair failed and we were unable to recover it. 
00:26:56.989 [2024-07-13 06:22:03.402423] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.989 [2024-07-13 06:22:03.402550] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.989 [2024-07-13 06:22:03.402577] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.989 [2024-07-13 06:22:03.402594] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.989 [2024-07-13 06:22:03.402607] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.989 [2024-07-13 06:22:03.402638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.989 qpair failed and we were unable to recover it. 00:26:56.989 [2024-07-13 06:22:03.412481] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.989 [2024-07-13 06:22:03.412605] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.989 [2024-07-13 06:22:03.412631] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.989 [2024-07-13 06:22:03.412647] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.989 [2024-07-13 06:22:03.412661] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.989 [2024-07-13 06:22:03.412692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.989 qpair failed and we were unable to recover it. 00:26:56.989 [2024-07-13 06:22:03.422470] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.989 [2024-07-13 06:22:03.422596] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.989 [2024-07-13 06:22:03.422622] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.989 [2024-07-13 06:22:03.422638] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.989 [2024-07-13 06:22:03.422652] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.989 [2024-07-13 06:22:03.422683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.989 qpair failed and we were unable to recover it. 
00:26:56.989 [2024-07-13 06:22:03.432630] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.989 [2024-07-13 06:22:03.432762] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.989 [2024-07-13 06:22:03.432787] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.989 [2024-07-13 06:22:03.432802] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.989 [2024-07-13 06:22:03.432816] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.989 [2024-07-13 06:22:03.432847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.989 qpair failed and we were unable to recover it. 00:26:56.989 [2024-07-13 06:22:03.442534] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.989 [2024-07-13 06:22:03.442694] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.989 [2024-07-13 06:22:03.442721] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.989 [2024-07-13 06:22:03.442737] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.989 [2024-07-13 06:22:03.442750] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.989 [2024-07-13 06:22:03.442780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.989 qpair failed and we were unable to recover it. 00:26:56.989 [2024-07-13 06:22:03.452564] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.989 [2024-07-13 06:22:03.452690] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.989 [2024-07-13 06:22:03.452716] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.989 [2024-07-13 06:22:03.452731] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.989 [2024-07-13 06:22:03.452745] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.989 [2024-07-13 06:22:03.452775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.989 qpair failed and we were unable to recover it. 
00:26:56.989 [2024-07-13 06:22:03.462588] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.989 [2024-07-13 06:22:03.462750] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.989 [2024-07-13 06:22:03.462778] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.989 [2024-07-13 06:22:03.462794] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.989 [2024-07-13 06:22:03.462807] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:26:56.989 [2024-07-13 06:22:03.462852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:56.989 qpair failed and we were unable to recover it. 00:26:56.989 [2024-07-13 06:22:03.472630] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.989 [2024-07-13 06:22:03.472783] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.989 [2024-07-13 06:22:03.472816] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.989 [2024-07-13 06:22:03.472833] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.989 [2024-07-13 06:22:03.472847] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:56.989 [2024-07-13 06:22:03.472887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.989 qpair failed and we were unable to recover it. 00:26:56.989 [2024-07-13 06:22:03.482655] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.989 [2024-07-13 06:22:03.482777] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.989 [2024-07-13 06:22:03.482805] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.989 [2024-07-13 06:22:03.482826] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.989 [2024-07-13 06:22:03.482840] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:56.989 [2024-07-13 06:22:03.482876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.989 qpair failed and we were unable to recover it. 
00:26:56.989 [2024-07-13 06:22:03.492726] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:56.989 [2024-07-13 06:22:03.492857] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:56.989 [2024-07-13 06:22:03.492901] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:56.989 [2024-07-13 06:22:03.492936] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:56.989 [2024-07-13 06:22:03.492955] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:56.990 [2024-07-13 06:22:03.492986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:56.990 qpair failed and we were unable to recover it. 00:26:57.249 [2024-07-13 06:22:03.502781] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.249 [2024-07-13 06:22:03.502926] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.249 [2024-07-13 06:22:03.502960] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.249 [2024-07-13 06:22:03.502978] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.249 [2024-07-13 06:22:03.502992] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.249 [2024-07-13 06:22:03.503023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.249 qpair failed and we were unable to recover it. 00:26:57.249 [2024-07-13 06:22:03.512729] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.249 [2024-07-13 06:22:03.512898] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.249 [2024-07-13 06:22:03.512926] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.249 [2024-07-13 06:22:03.512942] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.249 [2024-07-13 06:22:03.512956] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.249 [2024-07-13 06:22:03.512986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.249 qpair failed and we were unable to recover it. 
00:26:57.249 [2024-07-13 06:22:03.522732] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.249 [2024-07-13 06:22:03.522846] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.249 [2024-07-13 06:22:03.522877] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.250 [2024-07-13 06:22:03.522894] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.250 [2024-07-13 06:22:03.522907] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.250 [2024-07-13 06:22:03.522937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.250 qpair failed and we were unable to recover it. 00:26:57.250 [2024-07-13 06:22:03.532794] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.250 [2024-07-13 06:22:03.532934] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.250 [2024-07-13 06:22:03.532961] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.250 [2024-07-13 06:22:03.532982] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.250 [2024-07-13 06:22:03.532995] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.250 [2024-07-13 06:22:03.533025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.250 qpair failed and we were unable to recover it. 00:26:57.250 [2024-07-13 06:22:03.542813] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.250 [2024-07-13 06:22:03.542983] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.250 [2024-07-13 06:22:03.543014] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.250 [2024-07-13 06:22:03.543030] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.250 [2024-07-13 06:22:03.543043] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.250 [2024-07-13 06:22:03.543072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.250 qpair failed and we were unable to recover it. 
00:26:57.250 [2024-07-13 06:22:03.552831] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.250 [2024-07-13 06:22:03.552957] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.250 [2024-07-13 06:22:03.552985] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.250 [2024-07-13 06:22:03.553001] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.250 [2024-07-13 06:22:03.553015] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.250 [2024-07-13 06:22:03.553045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.250 qpair failed and we were unable to recover it. 00:26:57.250 [2024-07-13 06:22:03.562875] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.250 [2024-07-13 06:22:03.563011] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.250 [2024-07-13 06:22:03.563038] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.250 [2024-07-13 06:22:03.563053] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.250 [2024-07-13 06:22:03.563067] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.250 [2024-07-13 06:22:03.563096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.250 qpair failed and we were unable to recover it. 00:26:57.250 [2024-07-13 06:22:03.572923] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.250 [2024-07-13 06:22:03.573059] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.250 [2024-07-13 06:22:03.573090] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.250 [2024-07-13 06:22:03.573107] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.250 [2024-07-13 06:22:03.573121] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.250 [2024-07-13 06:22:03.573150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.250 qpair failed and we were unable to recover it. 
00:26:57.250 [2024-07-13 06:22:03.582922] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.250 [2024-07-13 06:22:03.583057] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.250 [2024-07-13 06:22:03.583083] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.250 [2024-07-13 06:22:03.583098] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.250 [2024-07-13 06:22:03.583111] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.250 [2024-07-13 06:22:03.583142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.250 qpair failed and we were unable to recover it. 00:26:57.250 [2024-07-13 06:22:03.592959] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.250 [2024-07-13 06:22:03.593080] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.250 [2024-07-13 06:22:03.593107] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.250 [2024-07-13 06:22:03.593123] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.250 [2024-07-13 06:22:03.593138] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.250 [2024-07-13 06:22:03.593168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.250 qpair failed and we were unable to recover it. 00:26:57.250 [2024-07-13 06:22:03.603103] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.250 [2024-07-13 06:22:03.603266] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.250 [2024-07-13 06:22:03.603291] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.250 [2024-07-13 06:22:03.603306] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.250 [2024-07-13 06:22:03.603318] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.250 [2024-07-13 06:22:03.603348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.250 qpair failed and we were unable to recover it. 
00:26:57.250 [2024-07-13 06:22:03.613076] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.250 [2024-07-13 06:22:03.613246] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.250 [2024-07-13 06:22:03.613272] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.250 [2024-07-13 06:22:03.613288] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.250 [2024-07-13 06:22:03.613301] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.250 [2024-07-13 06:22:03.613344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.250 qpair failed and we were unable to recover it. 00:26:57.250 [2024-07-13 06:22:03.623029] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.250 [2024-07-13 06:22:03.623148] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.250 [2024-07-13 06:22:03.623172] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.250 [2024-07-13 06:22:03.623188] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.250 [2024-07-13 06:22:03.623201] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.250 [2024-07-13 06:22:03.623230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.250 qpair failed and we were unable to recover it. 00:26:57.250 [2024-07-13 06:22:03.633100] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.250 [2024-07-13 06:22:03.633249] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.250 [2024-07-13 06:22:03.633276] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.250 [2024-07-13 06:22:03.633291] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.250 [2024-07-13 06:22:03.633304] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.250 [2024-07-13 06:22:03.633348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.250 qpair failed and we were unable to recover it. 
00:26:57.250 [2024-07-13 06:22:03.643074] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.250 [2024-07-13 06:22:03.643190] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.250 [2024-07-13 06:22:03.643216] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.250 [2024-07-13 06:22:03.643231] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.251 [2024-07-13 06:22:03.643245] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.251 [2024-07-13 06:22:03.643274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.251 qpair failed and we were unable to recover it. 00:26:57.251 [2024-07-13 06:22:03.653231] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.251 [2024-07-13 06:22:03.653354] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.251 [2024-07-13 06:22:03.653380] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.251 [2024-07-13 06:22:03.653395] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.251 [2024-07-13 06:22:03.653409] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.251 [2024-07-13 06:22:03.653454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.251 qpair failed and we were unable to recover it. 00:26:57.251 [2024-07-13 06:22:03.663137] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.251 [2024-07-13 06:22:03.663264] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.251 [2024-07-13 06:22:03.663295] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.251 [2024-07-13 06:22:03.663312] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.251 [2024-07-13 06:22:03.663327] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.251 [2024-07-13 06:22:03.663356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.251 qpair failed and we were unable to recover it. 
00:26:57.251 [2024-07-13 06:22:03.673179] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.251 [2024-07-13 06:22:03.673301] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.251 [2024-07-13 06:22:03.673327] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.251 [2024-07-13 06:22:03.673342] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.251 [2024-07-13 06:22:03.673357] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.251 [2024-07-13 06:22:03.673385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.251 qpair failed and we were unable to recover it. 00:26:57.251 [2024-07-13 06:22:03.683204] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.251 [2024-07-13 06:22:03.683327] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.251 [2024-07-13 06:22:03.683355] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.251 [2024-07-13 06:22:03.683371] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.251 [2024-07-13 06:22:03.683384] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.251 [2024-07-13 06:22:03.683414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.251 qpair failed and we were unable to recover it. 00:26:57.251 [2024-07-13 06:22:03.693245] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.251 [2024-07-13 06:22:03.693372] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.251 [2024-07-13 06:22:03.693398] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.251 [2024-07-13 06:22:03.693414] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.251 [2024-07-13 06:22:03.693428] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.251 [2024-07-13 06:22:03.693456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.251 qpair failed and we were unable to recover it. 
00:26:57.251 [2024-07-13 06:22:03.703257] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.251 [2024-07-13 06:22:03.703379] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.251 [2024-07-13 06:22:03.703405] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.251 [2024-07-13 06:22:03.703420] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.251 [2024-07-13 06:22:03.703434] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.251 [2024-07-13 06:22:03.703468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.251 qpair failed and we were unable to recover it. 00:26:57.251 [2024-07-13 06:22:03.713312] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.251 [2024-07-13 06:22:03.713427] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.251 [2024-07-13 06:22:03.713452] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.251 [2024-07-13 06:22:03.713467] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.251 [2024-07-13 06:22:03.713481] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.251 [2024-07-13 06:22:03.713511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.251 qpair failed and we were unable to recover it. 00:26:57.251 [2024-07-13 06:22:03.723340] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.251 [2024-07-13 06:22:03.723452] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.251 [2024-07-13 06:22:03.723476] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.251 [2024-07-13 06:22:03.723492] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.251 [2024-07-13 06:22:03.723505] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.251 [2024-07-13 06:22:03.723535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.251 qpair failed and we were unable to recover it. 
00:26:57.251 [2024-07-13 06:22:03.733384] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.251 [2024-07-13 06:22:03.733510] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.251 [2024-07-13 06:22:03.733537] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.251 [2024-07-13 06:22:03.733552] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.251 [2024-07-13 06:22:03.733566] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.251 [2024-07-13 06:22:03.733594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.251 qpair failed and we were unable to recover it. 00:26:57.251 [2024-07-13 06:22:03.743381] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.251 [2024-07-13 06:22:03.743543] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.251 [2024-07-13 06:22:03.743570] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.251 [2024-07-13 06:22:03.743585] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.251 [2024-07-13 06:22:03.743599] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.251 [2024-07-13 06:22:03.743628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.251 qpair failed and we were unable to recover it. 00:26:57.251 [2024-07-13 06:22:03.753397] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.251 [2024-07-13 06:22:03.753515] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.251 [2024-07-13 06:22:03.753554] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.251 [2024-07-13 06:22:03.753571] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.251 [2024-07-13 06:22:03.753584] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.251 [2024-07-13 06:22:03.753613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.251 qpair failed and we were unable to recover it. 
00:26:57.510 [2024-07-13 06:22:03.763478] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.510 [2024-07-13 06:22:03.763609] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.510 [2024-07-13 06:22:03.763638] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.510 [2024-07-13 06:22:03.763655] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.510 [2024-07-13 06:22:03.763668] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.510 [2024-07-13 06:22:03.763698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.510 qpair failed and we were unable to recover it. 00:26:57.510 [2024-07-13 06:22:03.773471] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.510 [2024-07-13 06:22:03.773629] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.510 [2024-07-13 06:22:03.773658] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.510 [2024-07-13 06:22:03.773674] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.510 [2024-07-13 06:22:03.773688] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.510 [2024-07-13 06:22:03.773719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.510 qpair failed and we were unable to recover it. 00:26:57.510 [2024-07-13 06:22:03.783484] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.510 [2024-07-13 06:22:03.783606] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.510 [2024-07-13 06:22:03.783641] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.510 [2024-07-13 06:22:03.783657] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.510 [2024-07-13 06:22:03.783671] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.510 [2024-07-13 06:22:03.783700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.510 qpair failed and we were unable to recover it. 
00:26:57.510 [2024-07-13 06:22:03.793517] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.510 [2024-07-13 06:22:03.793638] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.510 [2024-07-13 06:22:03.793674] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.510 [2024-07-13 06:22:03.793690] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.510 [2024-07-13 06:22:03.793703] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.510 [2024-07-13 06:22:03.793738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.510 qpair failed and we were unable to recover it. 00:26:57.510 [2024-07-13 06:22:03.803537] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.510 [2024-07-13 06:22:03.803654] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.510 [2024-07-13 06:22:03.803679] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.510 [2024-07-13 06:22:03.803694] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.510 [2024-07-13 06:22:03.803707] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.510 [2024-07-13 06:22:03.803736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.510 qpair failed and we were unable to recover it. 00:26:57.510 [2024-07-13 06:22:03.813615] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.510 [2024-07-13 06:22:03.813770] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.510 [2024-07-13 06:22:03.813796] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.510 [2024-07-13 06:22:03.813812] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.510 [2024-07-13 06:22:03.813826] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.510 [2024-07-13 06:22:03.813854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.510 qpair failed and we were unable to recover it. 
00:26:57.510 [2024-07-13 06:22:03.823623] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.510 [2024-07-13 06:22:03.823748] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.510 [2024-07-13 06:22:03.823772] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.510 [2024-07-13 06:22:03.823789] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.510 [2024-07-13 06:22:03.823802] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.510 [2024-07-13 06:22:03.823831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.510 qpair failed and we were unable to recover it. 00:26:57.510 [2024-07-13 06:22:03.833663] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.510 [2024-07-13 06:22:03.833808] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.510 [2024-07-13 06:22:03.833835] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.510 [2024-07-13 06:22:03.833850] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.511 [2024-07-13 06:22:03.833864] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.511 [2024-07-13 06:22:03.833903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.511 qpair failed and we were unable to recover it. 00:26:57.511 [2024-07-13 06:22:03.843695] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.511 [2024-07-13 06:22:03.843821] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.511 [2024-07-13 06:22:03.843851] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.511 [2024-07-13 06:22:03.843876] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.511 [2024-07-13 06:22:03.843893] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.511 [2024-07-13 06:22:03.843922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.511 qpair failed and we were unable to recover it. 
00:26:57.511 [2024-07-13 06:22:03.853696] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.511 [2024-07-13 06:22:03.853819] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.511 [2024-07-13 06:22:03.853845] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.511 [2024-07-13 06:22:03.853861] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.511 [2024-07-13 06:22:03.853882] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.511 [2024-07-13 06:22:03.853911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.511 qpair failed and we were unable to recover it. 00:26:57.511 [2024-07-13 06:22:03.863711] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.511 [2024-07-13 06:22:03.863832] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.511 [2024-07-13 06:22:03.863864] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.511 [2024-07-13 06:22:03.863890] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.511 [2024-07-13 06:22:03.863903] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.511 [2024-07-13 06:22:03.863932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.511 qpair failed and we were unable to recover it. 00:26:57.511 [2024-07-13 06:22:03.873738] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.511 [2024-07-13 06:22:03.873861] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.511 [2024-07-13 06:22:03.873892] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.511 [2024-07-13 06:22:03.873908] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.511 [2024-07-13 06:22:03.873922] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.511 [2024-07-13 06:22:03.873951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.511 qpair failed and we were unable to recover it. 
00:26:57.511 [2024-07-13 06:22:03.883828] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.511 [2024-07-13 06:22:03.883956] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.511 [2024-07-13 06:22:03.883981] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.511 [2024-07-13 06:22:03.883995] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.511 [2024-07-13 06:22:03.884008] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.511 [2024-07-13 06:22:03.884045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.511 qpair failed and we were unable to recover it. 00:26:57.511 [2024-07-13 06:22:03.893847] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.511 [2024-07-13 06:22:03.894013] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.511 [2024-07-13 06:22:03.894040] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.511 [2024-07-13 06:22:03.894056] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.511 [2024-07-13 06:22:03.894070] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.511 [2024-07-13 06:22:03.894099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.511 qpair failed and we were unable to recover it. 00:26:57.511 [2024-07-13 06:22:03.903879] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.511 [2024-07-13 06:22:03.904023] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.511 [2024-07-13 06:22:03.904050] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.511 [2024-07-13 06:22:03.904066] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.511 [2024-07-13 06:22:03.904079] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.511 [2024-07-13 06:22:03.904108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.511 qpair failed and we were unable to recover it. 
00:26:57.511 [2024-07-13 06:22:03.913913] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.511 [2024-07-13 06:22:03.914071] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.511 [2024-07-13 06:22:03.914097] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.511 [2024-07-13 06:22:03.914113] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.511 [2024-07-13 06:22:03.914127] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.511 [2024-07-13 06:22:03.914155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.511 qpair failed and we were unable to recover it. 00:26:57.511 [2024-07-13 06:22:03.923908] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.511 [2024-07-13 06:22:03.924027] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.511 [2024-07-13 06:22:03.924051] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.511 [2024-07-13 06:22:03.924067] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.511 [2024-07-13 06:22:03.924080] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.511 [2024-07-13 06:22:03.924111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.511 qpair failed and we were unable to recover it. 00:26:57.511 [2024-07-13 06:22:03.933925] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.511 [2024-07-13 06:22:03.934058] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.511 [2024-07-13 06:22:03.934089] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.511 [2024-07-13 06:22:03.934105] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.511 [2024-07-13 06:22:03.934119] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.511 [2024-07-13 06:22:03.934148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.511 qpair failed and we were unable to recover it. 
00:26:57.511 [2024-07-13 06:22:03.943952] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.511 [2024-07-13 06:22:03.944105] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.511 [2024-07-13 06:22:03.944132] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.511 [2024-07-13 06:22:03.944148] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.511 [2024-07-13 06:22:03.944162] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.511 [2024-07-13 06:22:03.944190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.511 qpair failed and we were unable to recover it. 00:26:57.511 [2024-07-13 06:22:03.953988] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.511 [2024-07-13 06:22:03.954157] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.511 [2024-07-13 06:22:03.954183] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.511 [2024-07-13 06:22:03.954199] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.511 [2024-07-13 06:22:03.954212] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.511 [2024-07-13 06:22:03.954241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.511 qpair failed and we were unable to recover it. 00:26:57.511 [2024-07-13 06:22:03.964014] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.511 [2024-07-13 06:22:03.964144] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.511 [2024-07-13 06:22:03.964169] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.511 [2024-07-13 06:22:03.964184] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.511 [2024-07-13 06:22:03.964197] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.511 [2024-07-13 06:22:03.964226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.511 qpair failed and we were unable to recover it. 
00:26:57.511 [2024-07-13 06:22:03.974090] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.511 [2024-07-13 06:22:03.974211] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.511 [2024-07-13 06:22:03.974235] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.511 [2024-07-13 06:22:03.974251] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.511 [2024-07-13 06:22:03.974264] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.511 [2024-07-13 06:22:03.974297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.511 qpair failed and we were unable to recover it. 00:26:57.512 [2024-07-13 06:22:03.984095] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.512 [2024-07-13 06:22:03.984238] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.512 [2024-07-13 06:22:03.984263] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.512 [2024-07-13 06:22:03.984279] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.512 [2024-07-13 06:22:03.984293] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.512 [2024-07-13 06:22:03.984322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.512 qpair failed and we were unable to recover it. 00:26:57.512 [2024-07-13 06:22:03.994086] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.512 [2024-07-13 06:22:03.994208] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.512 [2024-07-13 06:22:03.994235] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.512 [2024-07-13 06:22:03.994250] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.512 [2024-07-13 06:22:03.994264] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.512 [2024-07-13 06:22:03.994292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.512 qpair failed and we were unable to recover it. 
00:26:57.512 [2024-07-13 06:22:04.004136] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.512 [2024-07-13 06:22:04.004262] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.512 [2024-07-13 06:22:04.004287] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.512 [2024-07-13 06:22:04.004302] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.512 [2024-07-13 06:22:04.004316] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.512 [2024-07-13 06:22:04.004345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.512 qpair failed and we were unable to recover it. 00:26:57.512 [2024-07-13 06:22:04.014183] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.512 [2024-07-13 06:22:04.014304] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.512 [2024-07-13 06:22:04.014328] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.512 [2024-07-13 06:22:04.014344] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.512 [2024-07-13 06:22:04.014357] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.512 [2024-07-13 06:22:04.014385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.512 qpair failed and we were unable to recover it. 00:26:57.770 [2024-07-13 06:22:04.024196] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.770 [2024-07-13 06:22:04.024323] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.770 [2024-07-13 06:22:04.024366] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.770 [2024-07-13 06:22:04.024393] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.770 [2024-07-13 06:22:04.024408] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.770 [2024-07-13 06:22:04.024440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.770 qpair failed and we were unable to recover it. 
00:26:57.770 [2024-07-13 06:22:04.034233] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.770 [2024-07-13 06:22:04.034350] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.770 [2024-07-13 06:22:04.034377] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.770 [2024-07-13 06:22:04.034393] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.770 [2024-07-13 06:22:04.034407] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.770 [2024-07-13 06:22:04.034437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.770 qpair failed and we were unable to recover it. 00:26:57.770 [2024-07-13 06:22:04.044239] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.770 [2024-07-13 06:22:04.044356] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.770 [2024-07-13 06:22:04.044380] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.770 [2024-07-13 06:22:04.044396] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.770 [2024-07-13 06:22:04.044410] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.770 [2024-07-13 06:22:04.044440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.770 qpair failed and we were unable to recover it. 00:26:57.770 [2024-07-13 06:22:04.054306] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.770 [2024-07-13 06:22:04.054465] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.770 [2024-07-13 06:22:04.054492] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.770 [2024-07-13 06:22:04.054508] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.770 [2024-07-13 06:22:04.054522] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.770 [2024-07-13 06:22:04.054550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.770 qpair failed and we were unable to recover it. 
00:26:57.770 [2024-07-13 06:22:04.064342] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.770 [2024-07-13 06:22:04.064470] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.770 [2024-07-13 06:22:04.064497] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.770 [2024-07-13 06:22:04.064513] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.770 [2024-07-13 06:22:04.064532] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.770 [2024-07-13 06:22:04.064562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.770 qpair failed and we were unable to recover it. 00:26:57.770 [2024-07-13 06:22:04.074355] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.770 [2024-07-13 06:22:04.074475] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.770 [2024-07-13 06:22:04.074500] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.770 [2024-07-13 06:22:04.074515] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.770 [2024-07-13 06:22:04.074529] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.770 [2024-07-13 06:22:04.074559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.770 qpair failed and we were unable to recover it. 00:26:57.770 [2024-07-13 06:22:04.084393] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.770 [2024-07-13 06:22:04.084520] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.770 [2024-07-13 06:22:04.084546] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.770 [2024-07-13 06:22:04.084562] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.770 [2024-07-13 06:22:04.084575] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.770 [2024-07-13 06:22:04.084604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.770 qpair failed and we were unable to recover it. 
00:26:57.770 [2024-07-13 06:22:04.094396] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.770 [2024-07-13 06:22:04.094571] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.770 [2024-07-13 06:22:04.094598] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.770 [2024-07-13 06:22:04.094614] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.770 [2024-07-13 06:22:04.094628] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.770 [2024-07-13 06:22:04.094656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.770 qpair failed and we were unable to recover it. 00:26:57.770 [2024-07-13 06:22:04.104486] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.770 [2024-07-13 06:22:04.104647] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.770 [2024-07-13 06:22:04.104674] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.770 [2024-07-13 06:22:04.104690] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.770 [2024-07-13 06:22:04.104703] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.770 [2024-07-13 06:22:04.104748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.770 qpair failed and we were unable to recover it. 00:26:57.770 [2024-07-13 06:22:04.114467] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.770 [2024-07-13 06:22:04.114593] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.770 [2024-07-13 06:22:04.114618] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.770 [2024-07-13 06:22:04.114633] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.770 [2024-07-13 06:22:04.114646] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.771 [2024-07-13 06:22:04.114675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.771 qpair failed and we were unable to recover it. 
00:26:57.771 [2024-07-13 06:22:04.124527] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.771 [2024-07-13 06:22:04.124651] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.771 [2024-07-13 06:22:04.124687] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.771 [2024-07-13 06:22:04.124702] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.771 [2024-07-13 06:22:04.124716] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.771 [2024-07-13 06:22:04.124745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.771 qpair failed and we were unable to recover it. 00:26:57.771 [2024-07-13 06:22:04.134528] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.771 [2024-07-13 06:22:04.134653] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.771 [2024-07-13 06:22:04.134677] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.771 [2024-07-13 06:22:04.134693] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.771 [2024-07-13 06:22:04.134706] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.771 [2024-07-13 06:22:04.134735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.771 qpair failed and we were unable to recover it. 00:26:57.771 [2024-07-13 06:22:04.144582] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.771 [2024-07-13 06:22:04.144719] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.771 [2024-07-13 06:22:04.144745] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.771 [2024-07-13 06:22:04.144761] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.771 [2024-07-13 06:22:04.144775] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.771 [2024-07-13 06:22:04.144803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.771 qpair failed and we were unable to recover it. 
00:26:57.771 [2024-07-13 06:22:04.154581] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.771 [2024-07-13 06:22:04.154697] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.771 [2024-07-13 06:22:04.154722] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.771 [2024-07-13 06:22:04.154738] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.771 [2024-07-13 06:22:04.154757] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.771 [2024-07-13 06:22:04.154786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.771 qpair failed and we were unable to recover it. 00:26:57.771 [2024-07-13 06:22:04.164636] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.771 [2024-07-13 06:22:04.164780] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.771 [2024-07-13 06:22:04.164807] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.771 [2024-07-13 06:22:04.164823] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.771 [2024-07-13 06:22:04.164836] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.771 [2024-07-13 06:22:04.164871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.771 qpair failed and we were unable to recover it. 00:26:57.771 [2024-07-13 06:22:04.174654] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.771 [2024-07-13 06:22:04.174787] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.771 [2024-07-13 06:22:04.174814] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.771 [2024-07-13 06:22:04.174829] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.771 [2024-07-13 06:22:04.174843] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.771 [2024-07-13 06:22:04.174877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.771 qpair failed and we were unable to recover it. 
00:26:57.771 [2024-07-13 06:22:04.184670] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.771 [2024-07-13 06:22:04.184824] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.771 [2024-07-13 06:22:04.184851] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.771 [2024-07-13 06:22:04.184872] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.771 [2024-07-13 06:22:04.184887] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.771 [2024-07-13 06:22:04.184923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.771 qpair failed and we were unable to recover it. 00:26:57.771 [2024-07-13 06:22:04.194736] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.771 [2024-07-13 06:22:04.194857] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.771 [2024-07-13 06:22:04.194890] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.771 [2024-07-13 06:22:04.194910] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.771 [2024-07-13 06:22:04.194924] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.771 [2024-07-13 06:22:04.194952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.771 qpair failed and we were unable to recover it. 00:26:57.771 [2024-07-13 06:22:04.204737] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.771 [2024-07-13 06:22:04.204862] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.771 [2024-07-13 06:22:04.204896] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.771 [2024-07-13 06:22:04.204912] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.771 [2024-07-13 06:22:04.204925] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.771 [2024-07-13 06:22:04.204954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.771 qpair failed and we were unable to recover it. 
00:26:57.771 [2024-07-13 06:22:04.214800] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.771 [2024-07-13 06:22:04.214937] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.771 [2024-07-13 06:22:04.214964] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.771 [2024-07-13 06:22:04.214979] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.771 [2024-07-13 06:22:04.214996] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.771 [2024-07-13 06:22:04.215025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.771 qpair failed and we were unable to recover it. 00:26:57.771 [2024-07-13 06:22:04.224786] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.771 [2024-07-13 06:22:04.224914] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.771 [2024-07-13 06:22:04.224939] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.771 [2024-07-13 06:22:04.224954] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.771 [2024-07-13 06:22:04.224967] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.771 [2024-07-13 06:22:04.224995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.771 qpair failed and we were unable to recover it. 00:26:57.771 [2024-07-13 06:22:04.234850] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.771 [2024-07-13 06:22:04.234980] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.771 [2024-07-13 06:22:04.235006] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.771 [2024-07-13 06:22:04.235022] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.771 [2024-07-13 06:22:04.235035] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.771 [2024-07-13 06:22:04.235065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.771 qpair failed and we were unable to recover it. 
00:26:57.771 [2024-07-13 06:22:04.244877] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.771 [2024-07-13 06:22:04.245006] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.771 [2024-07-13 06:22:04.245032] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.771 [2024-07-13 06:22:04.245047] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.771 [2024-07-13 06:22:04.245067] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.771 [2024-07-13 06:22:04.245096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.771 qpair failed and we were unable to recover it. 00:26:57.771 [2024-07-13 06:22:04.254914] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.771 [2024-07-13 06:22:04.255037] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.771 [2024-07-13 06:22:04.255062] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.771 [2024-07-13 06:22:04.255078] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.771 [2024-07-13 06:22:04.255092] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.771 [2024-07-13 06:22:04.255121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.771 qpair failed and we were unable to recover it. 00:26:57.771 [2024-07-13 06:22:04.264905] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:57.771 [2024-07-13 06:22:04.265031] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:57.772 [2024-07-13 06:22:04.265057] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:57.772 [2024-07-13 06:22:04.265071] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:57.772 [2024-07-13 06:22:04.265084] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:57.772 [2024-07-13 06:22:04.265115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:57.772 qpair failed and we were unable to recover it. 
00:26:58.554 [2024-07-13 06:22:04.906766] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.554 [2024-07-13 06:22:04.906902] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.554 [2024-07-13 06:22:04.906928] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.554 [2024-07-13 06:22:04.906944] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.554 [2024-07-13 06:22:04.906958] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.554 [2024-07-13 06:22:04.906987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.554 qpair failed and we were unable to recover it. 00:26:58.554 [2024-07-13 06:22:04.916763] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.554 [2024-07-13 06:22:04.916900] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.554 [2024-07-13 06:22:04.916926] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.554 [2024-07-13 06:22:04.916942] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.554 [2024-07-13 06:22:04.916956] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.554 [2024-07-13 06:22:04.916991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.554 qpair failed and we were unable to recover it. 00:26:58.554 [2024-07-13 06:22:04.926781] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.554 [2024-07-13 06:22:04.926914] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.554 [2024-07-13 06:22:04.926940] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.554 [2024-07-13 06:22:04.926955] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.554 [2024-07-13 06:22:04.926969] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.554 [2024-07-13 06:22:04.926997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.554 qpair failed and we were unable to recover it. 
00:26:58.554 [2024-07-13 06:22:04.936832] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.554 [2024-07-13 06:22:04.936972] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.554 [2024-07-13 06:22:04.936997] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.554 [2024-07-13 06:22:04.937012] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.554 [2024-07-13 06:22:04.937026] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.554 [2024-07-13 06:22:04.937055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.554 qpair failed and we were unable to recover it. 00:26:58.554 [2024-07-13 06:22:04.946881] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.554 [2024-07-13 06:22:04.947042] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.554 [2024-07-13 06:22:04.947072] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.554 [2024-07-13 06:22:04.947088] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.554 [2024-07-13 06:22:04.947102] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.554 [2024-07-13 06:22:04.947132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.554 qpair failed and we were unable to recover it. 00:26:58.554 [2024-07-13 06:22:04.956894] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.554 [2024-07-13 06:22:04.957053] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.554 [2024-07-13 06:22:04.957079] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.554 [2024-07-13 06:22:04.957094] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.554 [2024-07-13 06:22:04.957108] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.554 [2024-07-13 06:22:04.957137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.554 qpair failed and we were unable to recover it. 
00:26:58.555 [2024-07-13 06:22:04.966929] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.555 [2024-07-13 06:22:04.967068] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.555 [2024-07-13 06:22:04.967099] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.555 [2024-07-13 06:22:04.967115] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.555 [2024-07-13 06:22:04.967129] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.555 [2024-07-13 06:22:04.967157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.555 qpair failed and we were unable to recover it. 00:26:58.555 [2024-07-13 06:22:04.977009] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.555 [2024-07-13 06:22:04.977145] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.555 [2024-07-13 06:22:04.977172] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.555 [2024-07-13 06:22:04.977187] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.555 [2024-07-13 06:22:04.977201] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.555 [2024-07-13 06:22:04.977230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.555 qpair failed and we were unable to recover it. 00:26:58.555 [2024-07-13 06:22:04.986973] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.555 [2024-07-13 06:22:04.987099] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.555 [2024-07-13 06:22:04.987124] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.555 [2024-07-13 06:22:04.987139] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.555 [2024-07-13 06:22:04.987154] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.555 [2024-07-13 06:22:04.987182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.555 qpair failed and we were unable to recover it. 
00:26:58.555 [2024-07-13 06:22:04.996981] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.555 [2024-07-13 06:22:04.997103] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.555 [2024-07-13 06:22:04.997139] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.555 [2024-07-13 06:22:04.997154] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.555 [2024-07-13 06:22:04.997168] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.555 [2024-07-13 06:22:04.997197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.555 qpair failed and we were unable to recover it. 00:26:58.555 [2024-07-13 06:22:05.007048] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.555 [2024-07-13 06:22:05.007170] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.555 [2024-07-13 06:22:05.007196] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.555 [2024-07-13 06:22:05.007212] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.555 [2024-07-13 06:22:05.007226] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.555 [2024-07-13 06:22:05.007261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.555 qpair failed and we were unable to recover it. 00:26:58.555 [2024-07-13 06:22:05.017092] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.555 [2024-07-13 06:22:05.017228] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.555 [2024-07-13 06:22:05.017254] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.555 [2024-07-13 06:22:05.017269] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.555 [2024-07-13 06:22:05.017283] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.555 [2024-07-13 06:22:05.017312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.555 qpair failed and we were unable to recover it. 
00:26:58.555 [2024-07-13 06:22:05.027102] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.555 [2024-07-13 06:22:05.027220] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.555 [2024-07-13 06:22:05.027246] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.555 [2024-07-13 06:22:05.027262] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.555 [2024-07-13 06:22:05.027276] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.555 [2024-07-13 06:22:05.027309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.555 qpair failed and we were unable to recover it. 00:26:58.555 [2024-07-13 06:22:05.037150] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.555 [2024-07-13 06:22:05.037299] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.555 [2024-07-13 06:22:05.037325] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.555 [2024-07-13 06:22:05.037340] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.555 [2024-07-13 06:22:05.037354] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.555 [2024-07-13 06:22:05.037399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.555 qpair failed and we were unable to recover it. 00:26:58.555 [2024-07-13 06:22:05.047122] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.555 [2024-07-13 06:22:05.047240] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.555 [2024-07-13 06:22:05.047266] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.555 [2024-07-13 06:22:05.047281] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.555 [2024-07-13 06:22:05.047295] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.555 [2024-07-13 06:22:05.047324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.555 qpair failed and we were unable to recover it. 
00:26:58.555 [2024-07-13 06:22:05.057171] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.555 [2024-07-13 06:22:05.057298] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.555 [2024-07-13 06:22:05.057328] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.555 [2024-07-13 06:22:05.057345] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.555 [2024-07-13 06:22:05.057358] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.555 [2024-07-13 06:22:05.057387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.555 qpair failed and we were unable to recover it. 00:26:58.815 [2024-07-13 06:22:05.067209] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.815 [2024-07-13 06:22:05.067334] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.815 [2024-07-13 06:22:05.067363] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.815 [2024-07-13 06:22:05.067393] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.815 [2024-07-13 06:22:05.067419] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.815 [2024-07-13 06:22:05.067463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.815 qpair failed and we were unable to recover it. 00:26:58.815 [2024-07-13 06:22:05.077240] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.815 [2024-07-13 06:22:05.077363] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.815 [2024-07-13 06:22:05.077390] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.815 [2024-07-13 06:22:05.077407] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.815 [2024-07-13 06:22:05.077421] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.815 [2024-07-13 06:22:05.077450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.815 qpair failed and we were unable to recover it. 
00:26:58.815 [2024-07-13 06:22:05.087233] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.815 [2024-07-13 06:22:05.087376] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.815 [2024-07-13 06:22:05.087403] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.816 [2024-07-13 06:22:05.087419] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.816 [2024-07-13 06:22:05.087433] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.816 [2024-07-13 06:22:05.087462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.816 qpair failed and we were unable to recover it. 00:26:58.816 [2024-07-13 06:22:05.097313] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.816 [2024-07-13 06:22:05.097440] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.816 [2024-07-13 06:22:05.097465] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.816 [2024-07-13 06:22:05.097480] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.816 [2024-07-13 06:22:05.097494] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.816 [2024-07-13 06:22:05.097528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.816 qpair failed and we were unable to recover it. 00:26:58.816 [2024-07-13 06:22:05.107342] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.816 [2024-07-13 06:22:05.107496] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.816 [2024-07-13 06:22:05.107523] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.816 [2024-07-13 06:22:05.107539] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.816 [2024-07-13 06:22:05.107569] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.816 [2024-07-13 06:22:05.107599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.816 qpair failed and we were unable to recover it. 
00:26:58.816 [2024-07-13 06:22:05.117351] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.816 [2024-07-13 06:22:05.117474] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.816 [2024-07-13 06:22:05.117501] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.816 [2024-07-13 06:22:05.117517] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.816 [2024-07-13 06:22:05.117531] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.816 [2024-07-13 06:22:05.117560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.816 qpair failed and we were unable to recover it. 00:26:58.816 [2024-07-13 06:22:05.127385] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.816 [2024-07-13 06:22:05.127506] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.816 [2024-07-13 06:22:05.127532] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.816 [2024-07-13 06:22:05.127547] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.816 [2024-07-13 06:22:05.127560] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.816 [2024-07-13 06:22:05.127589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.816 qpair failed and we were unable to recover it. 00:26:58.816 [2024-07-13 06:22:05.137415] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.816 [2024-07-13 06:22:05.137544] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.816 [2024-07-13 06:22:05.137570] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.816 [2024-07-13 06:22:05.137587] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.816 [2024-07-13 06:22:05.137600] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.816 [2024-07-13 06:22:05.137629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.816 qpair failed and we were unable to recover it. 
00:26:58.816 [2024-07-13 06:22:05.147433] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.816 [2024-07-13 06:22:05.147551] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.816 [2024-07-13 06:22:05.147581] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.816 [2024-07-13 06:22:05.147598] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.816 [2024-07-13 06:22:05.147612] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.816 [2024-07-13 06:22:05.147641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.816 qpair failed and we were unable to recover it. 00:26:58.816 [2024-07-13 06:22:05.157463] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.816 [2024-07-13 06:22:05.157584] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.816 [2024-07-13 06:22:05.157610] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.816 [2024-07-13 06:22:05.157625] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.816 [2024-07-13 06:22:05.157639] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.816 [2024-07-13 06:22:05.157667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.816 qpair failed and we were unable to recover it. 00:26:58.816 [2024-07-13 06:22:05.167468] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.816 [2024-07-13 06:22:05.167585] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.816 [2024-07-13 06:22:05.167609] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.816 [2024-07-13 06:22:05.167625] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.816 [2024-07-13 06:22:05.167638] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.816 [2024-07-13 06:22:05.167667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.816 qpair failed and we were unable to recover it. 
00:26:58.816 [2024-07-13 06:22:05.177516] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.816 [2024-07-13 06:22:05.177642] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.816 [2024-07-13 06:22:05.177670] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.816 [2024-07-13 06:22:05.177689] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.816 [2024-07-13 06:22:05.177704] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.816 [2024-07-13 06:22:05.177734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.816 qpair failed and we were unable to recover it. 00:26:58.816 [2024-07-13 06:22:05.187535] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.816 [2024-07-13 06:22:05.187654] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.816 [2024-07-13 06:22:05.187679] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.816 [2024-07-13 06:22:05.187695] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.816 [2024-07-13 06:22:05.187709] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.816 [2024-07-13 06:22:05.187743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.816 qpair failed and we were unable to recover it. 00:26:58.816 [2024-07-13 06:22:05.197571] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.816 [2024-07-13 06:22:05.197697] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.816 [2024-07-13 06:22:05.197725] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.816 [2024-07-13 06:22:05.197744] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.816 [2024-07-13 06:22:05.197759] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.816 [2024-07-13 06:22:05.197804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.816 qpair failed and we were unable to recover it. 
00:26:58.816 [2024-07-13 06:22:05.207613] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.816 [2024-07-13 06:22:05.207735] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.816 [2024-07-13 06:22:05.207761] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.816 [2024-07-13 06:22:05.207776] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.816 [2024-07-13 06:22:05.207789] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.816 [2024-07-13 06:22:05.207819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.816 qpair failed and we were unable to recover it. 00:26:58.816 [2024-07-13 06:22:05.217654] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.816 [2024-07-13 06:22:05.217801] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.816 [2024-07-13 06:22:05.217829] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.816 [2024-07-13 06:22:05.217844] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.816 [2024-07-13 06:22:05.217858] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.816 [2024-07-13 06:22:05.217894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.816 qpair failed and we were unable to recover it. 00:26:58.816 [2024-07-13 06:22:05.227630] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.816 [2024-07-13 06:22:05.227755] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.816 [2024-07-13 06:22:05.227780] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.816 [2024-07-13 06:22:05.227794] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.816 [2024-07-13 06:22:05.227808] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.816 [2024-07-13 06:22:05.227836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.816 qpair failed and we were unable to recover it. 
00:26:58.816 [2024-07-13 06:22:05.237713] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.816 [2024-07-13 06:22:05.237863] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.816 [2024-07-13 06:22:05.237902] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.816 [2024-07-13 06:22:05.237918] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.816 [2024-07-13 06:22:05.237932] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.816 [2024-07-13 06:22:05.237961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.816 qpair failed and we were unable to recover it. 00:26:58.816 [2024-07-13 06:22:05.247685] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.816 [2024-07-13 06:22:05.247802] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.816 [2024-07-13 06:22:05.247826] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.816 [2024-07-13 06:22:05.247841] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.816 [2024-07-13 06:22:05.247855] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.816 [2024-07-13 06:22:05.247894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.816 qpair failed and we were unable to recover it. 00:26:58.816 [2024-07-13 06:22:05.257743] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.816 [2024-07-13 06:22:05.257881] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.816 [2024-07-13 06:22:05.257905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.816 [2024-07-13 06:22:05.257920] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.816 [2024-07-13 06:22:05.257934] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.816 [2024-07-13 06:22:05.257963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.816 qpair failed and we were unable to recover it. 
00:26:58.816 [2024-07-13 06:22:05.267790] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.817 [2024-07-13 06:22:05.267971] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.817 [2024-07-13 06:22:05.267998] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.817 [2024-07-13 06:22:05.268014] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.817 [2024-07-13 06:22:05.268028] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.817 [2024-07-13 06:22:05.268057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.817 qpair failed and we were unable to recover it. 00:26:58.817 [2024-07-13 06:22:05.277783] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.817 [2024-07-13 06:22:05.277909] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.817 [2024-07-13 06:22:05.277934] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.817 [2024-07-13 06:22:05.277949] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.817 [2024-07-13 06:22:05.277968] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.817 [2024-07-13 06:22:05.277997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.817 qpair failed and we were unable to recover it. 00:26:58.817 [2024-07-13 06:22:05.287833] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.817 [2024-07-13 06:22:05.287966] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.817 [2024-07-13 06:22:05.287992] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.817 [2024-07-13 06:22:05.288008] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.817 [2024-07-13 06:22:05.288021] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.817 [2024-07-13 06:22:05.288049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.817 qpair failed and we were unable to recover it. 
00:26:58.817 [2024-07-13 06:22:05.297915] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.817 [2024-07-13 06:22:05.298058] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.817 [2024-07-13 06:22:05.298083] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.817 [2024-07-13 06:22:05.298099] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.817 [2024-07-13 06:22:05.298113] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.817 [2024-07-13 06:22:05.298142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.817 qpair failed and we were unable to recover it. 00:26:58.817 [2024-07-13 06:22:05.307883] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.817 [2024-07-13 06:22:05.308018] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.817 [2024-07-13 06:22:05.308044] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.817 [2024-07-13 06:22:05.308059] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.817 [2024-07-13 06:22:05.308072] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.817 [2024-07-13 06:22:05.308100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.817 qpair failed and we were unable to recover it. 00:26:58.817 [2024-07-13 06:22:05.317893] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:58.817 [2024-07-13 06:22:05.318029] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:58.817 [2024-07-13 06:22:05.318055] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:58.817 [2024-07-13 06:22:05.318071] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:58.817 [2024-07-13 06:22:05.318085] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:58.817 [2024-07-13 06:22:05.318113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:58.817 qpair failed and we were unable to recover it. 
00:26:59.077 [2024-07-13 06:22:05.328006] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.077 [2024-07-13 06:22:05.328137] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.077 [2024-07-13 06:22:05.328164] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.077 [2024-07-13 06:22:05.328180] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.077 [2024-07-13 06:22:05.328193] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.077 [2024-07-13 06:22:05.328224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.077 qpair failed and we were unable to recover it. 00:26:59.077 [2024-07-13 06:22:05.338041] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.077 [2024-07-13 06:22:05.338168] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.077 [2024-07-13 06:22:05.338205] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.077 [2024-07-13 06:22:05.338221] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.077 [2024-07-13 06:22:05.338235] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.077 [2024-07-13 06:22:05.338264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.077 qpair failed and we were unable to recover it. 00:26:59.077 [2024-07-13 06:22:05.348005] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.077 [2024-07-13 06:22:05.348157] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.077 [2024-07-13 06:22:05.348197] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.077 [2024-07-13 06:22:05.348215] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.077 [2024-07-13 06:22:05.348229] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.077 [2024-07-13 06:22:05.348259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.077 qpair failed and we were unable to recover it. 
00:26:59.077 [2024-07-13 06:22:05.358078] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.077 [2024-07-13 06:22:05.358222] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.077 [2024-07-13 06:22:05.358249] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.077 [2024-07-13 06:22:05.358266] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.077 [2024-07-13 06:22:05.358280] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.077 [2024-07-13 06:22:05.358308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.077 qpair failed and we were unable to recover it. 00:26:59.077 [2024-07-13 06:22:05.368066] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.077 [2024-07-13 06:22:05.368185] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.077 [2024-07-13 06:22:05.368211] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.077 [2024-07-13 06:22:05.368227] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.077 [2024-07-13 06:22:05.368246] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.077 [2024-07-13 06:22:05.368276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.077 qpair failed and we were unable to recover it. 00:26:59.077 [2024-07-13 06:22:05.378114] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.077 [2024-07-13 06:22:05.378241] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.077 [2024-07-13 06:22:05.378268] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.077 [2024-07-13 06:22:05.378283] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.077 [2024-07-13 06:22:05.378297] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.077 [2024-07-13 06:22:05.378325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.077 qpair failed and we were unable to recover it. 
00:26:59.077 [2024-07-13 06:22:05.388197] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.077 [2024-07-13 06:22:05.388319] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.077 [2024-07-13 06:22:05.388354] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.077 [2024-07-13 06:22:05.388370] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.077 [2024-07-13 06:22:05.388383] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0
00:26:59.077 [2024-07-13 06:22:05.388412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:59.077 qpair failed and we were unable to recover it.
[... the same seven-line CONNECT failure sequence repeats for each subsequent I/O qpair connect attempt, at roughly 10 ms intervals from 06:22:05.398 through 06:22:06.060 (console timestamps 00:26:59.077-00:26:59.597), always on tqpair=0x151d9f0 / qpair id 3 and always ending "qpair failed and we were unable to recover it."; only the final occurrence is shown below ...]
00:26:59.597 [2024-07-13 06:22:06.070106] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:59.597 [2024-07-13 06:22:06.070280] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:59.597 [2024-07-13 06:22:06.070306] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:59.597 [2024-07-13 06:22:06.070320] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:59.597 [2024-07-13 06:22:06.070334] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0
00:26:59.597 [2024-07-13 06:22:06.070362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:26:59.597 qpair failed and we were unable to recover it.
00:26:59.597 [2024-07-13 06:22:06.080162] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.597 [2024-07-13 06:22:06.080282] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.597 [2024-07-13 06:22:06.080307] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.597 [2024-07-13 06:22:06.080321] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.597 [2024-07-13 06:22:06.080334] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.597 [2024-07-13 06:22:06.080362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.597 qpair failed and we were unable to recover it. 00:26:59.597 [2024-07-13 06:22:06.090143] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.597 [2024-07-13 06:22:06.090261] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.597 [2024-07-13 06:22:06.090291] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.597 [2024-07-13 06:22:06.090307] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.597 [2024-07-13 06:22:06.090320] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.597 [2024-07-13 06:22:06.090348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.597 qpair failed and we were unable to recover it. 00:26:59.597 [2024-07-13 06:22:06.100204] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.597 [2024-07-13 06:22:06.100332] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.597 [2024-07-13 06:22:06.100358] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.597 [2024-07-13 06:22:06.100373] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.597 [2024-07-13 06:22:06.100386] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.597 [2024-07-13 06:22:06.100413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.597 qpair failed and we were unable to recover it. 
00:26:59.856 [2024-07-13 06:22:06.110241] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.856 [2024-07-13 06:22:06.110374] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.856 [2024-07-13 06:22:06.110402] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.856 [2024-07-13 06:22:06.110417] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.856 [2024-07-13 06:22:06.110431] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.856 [2024-07-13 06:22:06.110464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.856 qpair failed and we were unable to recover it. 00:26:59.856 [2024-07-13 06:22:06.120260] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.856 [2024-07-13 06:22:06.120378] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.856 [2024-07-13 06:22:06.120404] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.856 [2024-07-13 06:22:06.120419] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.856 [2024-07-13 06:22:06.120433] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.856 [2024-07-13 06:22:06.120463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.856 qpair failed and we were unable to recover it. 00:26:59.856 [2024-07-13 06:22:06.130313] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.856 [2024-07-13 06:22:06.130459] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.856 [2024-07-13 06:22:06.130484] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.856 [2024-07-13 06:22:06.130500] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.856 [2024-07-13 06:22:06.130513] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.856 [2024-07-13 06:22:06.130547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.856 qpair failed and we were unable to recover it. 
00:26:59.856 [2024-07-13 06:22:06.140297] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.856 [2024-07-13 06:22:06.140417] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.856 [2024-07-13 06:22:06.140442] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.856 [2024-07-13 06:22:06.140457] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.856 [2024-07-13 06:22:06.140470] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.856 [2024-07-13 06:22:06.140498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.856 qpair failed and we were unable to recover it. 00:26:59.856 [2024-07-13 06:22:06.150320] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.856 [2024-07-13 06:22:06.150440] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.856 [2024-07-13 06:22:06.150465] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.856 [2024-07-13 06:22:06.150480] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.856 [2024-07-13 06:22:06.150493] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.856 [2024-07-13 06:22:06.150521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.856 qpair failed and we were unable to recover it. 00:26:59.856 [2024-07-13 06:22:06.160348] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.856 [2024-07-13 06:22:06.160475] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.856 [2024-07-13 06:22:06.160501] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.856 [2024-07-13 06:22:06.160516] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.856 [2024-07-13 06:22:06.160530] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.856 [2024-07-13 06:22:06.160558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.856 qpair failed and we were unable to recover it. 
00:26:59.856 [2024-07-13 06:22:06.170404] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.856 [2024-07-13 06:22:06.170519] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.856 [2024-07-13 06:22:06.170544] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.857 [2024-07-13 06:22:06.170559] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.857 [2024-07-13 06:22:06.170572] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.857 [2024-07-13 06:22:06.170602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.857 qpair failed and we were unable to recover it. 00:26:59.857 [2024-07-13 06:22:06.180477] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.857 [2024-07-13 06:22:06.180640] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.857 [2024-07-13 06:22:06.180673] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.857 [2024-07-13 06:22:06.180688] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.857 [2024-07-13 06:22:06.180701] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.857 [2024-07-13 06:22:06.180729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.857 qpair failed and we were unable to recover it. 00:26:59.857 [2024-07-13 06:22:06.190461] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.857 [2024-07-13 06:22:06.190582] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.857 [2024-07-13 06:22:06.190607] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.857 [2024-07-13 06:22:06.190622] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.857 [2024-07-13 06:22:06.190635] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.857 [2024-07-13 06:22:06.190663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.857 qpair failed and we were unable to recover it. 
00:26:59.857 [2024-07-13 06:22:06.200487] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.857 [2024-07-13 06:22:06.200605] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.857 [2024-07-13 06:22:06.200631] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.857 [2024-07-13 06:22:06.200645] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.857 [2024-07-13 06:22:06.200659] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.857 [2024-07-13 06:22:06.200686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.857 qpair failed and we were unable to recover it. 00:26:59.857 [2024-07-13 06:22:06.210500] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.857 [2024-07-13 06:22:06.210666] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.857 [2024-07-13 06:22:06.210691] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.857 [2024-07-13 06:22:06.210705] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.857 [2024-07-13 06:22:06.210718] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.857 [2024-07-13 06:22:06.210746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.857 qpair failed and we were unable to recover it. 00:26:59.857 [2024-07-13 06:22:06.220628] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.857 [2024-07-13 06:22:06.220753] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.857 [2024-07-13 06:22:06.220778] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.857 [2024-07-13 06:22:06.220793] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.857 [2024-07-13 06:22:06.220806] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.857 [2024-07-13 06:22:06.220840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.857 qpair failed and we were unable to recover it. 
00:26:59.857 [2024-07-13 06:22:06.230597] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.857 [2024-07-13 06:22:06.230723] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.857 [2024-07-13 06:22:06.230746] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.857 [2024-07-13 06:22:06.230761] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.857 [2024-07-13 06:22:06.230773] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.857 [2024-07-13 06:22:06.230801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.857 qpair failed and we were unable to recover it. 00:26:59.857 [2024-07-13 06:22:06.240594] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.857 [2024-07-13 06:22:06.240717] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.857 [2024-07-13 06:22:06.240742] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.857 [2024-07-13 06:22:06.240757] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.857 [2024-07-13 06:22:06.240770] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.857 [2024-07-13 06:22:06.240801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.857 qpair failed and we were unable to recover it. 00:26:59.857 [2024-07-13 06:22:06.250644] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.857 [2024-07-13 06:22:06.250764] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.857 [2024-07-13 06:22:06.250789] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.857 [2024-07-13 06:22:06.250804] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.857 [2024-07-13 06:22:06.250817] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.857 [2024-07-13 06:22:06.250845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.857 qpair failed and we were unable to recover it. 
00:26:59.857 [2024-07-13 06:22:06.260676] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.857 [2024-07-13 06:22:06.260795] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.857 [2024-07-13 06:22:06.260820] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.857 [2024-07-13 06:22:06.260834] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.857 [2024-07-13 06:22:06.260847] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.857 [2024-07-13 06:22:06.260884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.857 qpair failed and we were unable to recover it. 00:26:59.857 [2024-07-13 06:22:06.270696] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.857 [2024-07-13 06:22:06.270817] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.857 [2024-07-13 06:22:06.270847] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.857 [2024-07-13 06:22:06.270862] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.857 [2024-07-13 06:22:06.270883] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.857 [2024-07-13 06:22:06.270912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.857 qpair failed and we were unable to recover it. 00:26:59.857 [2024-07-13 06:22:06.280734] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.857 [2024-07-13 06:22:06.280852] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.857 [2024-07-13 06:22:06.280884] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.857 [2024-07-13 06:22:06.280899] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.857 [2024-07-13 06:22:06.280913] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.857 [2024-07-13 06:22:06.280940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.857 qpair failed and we were unable to recover it. 
00:26:59.857 [2024-07-13 06:22:06.290725] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.857 [2024-07-13 06:22:06.290875] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.857 [2024-07-13 06:22:06.290900] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.857 [2024-07-13 06:22:06.290915] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.857 [2024-07-13 06:22:06.290928] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.857 [2024-07-13 06:22:06.290956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.857 qpair failed and we were unable to recover it. 00:26:59.857 [2024-07-13 06:22:06.300784] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.857 [2024-07-13 06:22:06.300911] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.857 [2024-07-13 06:22:06.300937] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.857 [2024-07-13 06:22:06.300951] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.857 [2024-07-13 06:22:06.300965] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.857 [2024-07-13 06:22:06.300993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.857 qpair failed and we were unable to recover it. 00:26:59.857 [2024-07-13 06:22:06.310811] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.857 [2024-07-13 06:22:06.310929] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.857 [2024-07-13 06:22:06.310955] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.857 [2024-07-13 06:22:06.310969] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.857 [2024-07-13 06:22:06.310982] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.858 [2024-07-13 06:22:06.311016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.858 qpair failed and we were unable to recover it. 
00:26:59.858 [2024-07-13 06:22:06.320923] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.858 [2024-07-13 06:22:06.321040] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.858 [2024-07-13 06:22:06.321065] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.858 [2024-07-13 06:22:06.321080] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.858 [2024-07-13 06:22:06.321093] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.858 [2024-07-13 06:22:06.321120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.858 qpair failed and we were unable to recover it. 00:26:59.858 [2024-07-13 06:22:06.330837] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.858 [2024-07-13 06:22:06.330966] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.858 [2024-07-13 06:22:06.330990] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.858 [2024-07-13 06:22:06.331005] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.858 [2024-07-13 06:22:06.331018] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.858 [2024-07-13 06:22:06.331046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.858 qpair failed and we were unable to recover it. 00:26:59.858 [2024-07-13 06:22:06.340928] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.858 [2024-07-13 06:22:06.341076] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.858 [2024-07-13 06:22:06.341101] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.858 [2024-07-13 06:22:06.341115] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.858 [2024-07-13 06:22:06.341128] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.858 [2024-07-13 06:22:06.341157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.858 qpair failed and we were unable to recover it. 
00:26:59.858 [2024-07-13 06:22:06.350947] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.858 [2024-07-13 06:22:06.351104] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.858 [2024-07-13 06:22:06.351130] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.858 [2024-07-13 06:22:06.351144] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.858 [2024-07-13 06:22:06.351158] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.858 [2024-07-13 06:22:06.351185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.858 qpair failed and we were unable to recover it. 00:26:59.858 [2024-07-13 06:22:06.360951] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:59.858 [2024-07-13 06:22:06.361077] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:59.858 [2024-07-13 06:22:06.361108] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:59.858 [2024-07-13 06:22:06.361123] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:59.858 [2024-07-13 06:22:06.361135] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:26:59.858 [2024-07-13 06:22:06.361163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:59.858 qpair failed and we were unable to recover it. 00:27:00.117 [2024-07-13 06:22:06.370974] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.117 [2024-07-13 06:22:06.371097] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.117 [2024-07-13 06:22:06.371124] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.117 [2024-07-13 06:22:06.371140] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.117 [2024-07-13 06:22:06.371153] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.117 [2024-07-13 06:22:06.371182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.117 qpair failed and we were unable to recover it. 
00:27:00.117 [2024-07-13 06:22:06.381104] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.117 [2024-07-13 06:22:06.381225] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.117 [2024-07-13 06:22:06.381251] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.117 [2024-07-13 06:22:06.381266] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.117 [2024-07-13 06:22:06.381279] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.117 [2024-07-13 06:22:06.381307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.117 qpair failed and we were unable to recover it. 00:27:00.117 [2024-07-13 06:22:06.391071] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.117 [2024-07-13 06:22:06.391198] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.117 [2024-07-13 06:22:06.391223] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.117 [2024-07-13 06:22:06.391238] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.117 [2024-07-13 06:22:06.391251] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.117 [2024-07-13 06:22:06.391279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.117 qpair failed and we were unable to recover it. 00:27:00.117 [2024-07-13 06:22:06.401067] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.117 [2024-07-13 06:22:06.401190] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.117 [2024-07-13 06:22:06.401216] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.117 [2024-07-13 06:22:06.401230] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.117 [2024-07-13 06:22:06.401244] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.117 [2024-07-13 06:22:06.401277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.117 qpair failed and we were unable to recover it. 
00:27:00.117 [2024-07-13 06:22:06.411132] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.117 [2024-07-13 06:22:06.411275] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.117 [2024-07-13 06:22:06.411301] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.117 [2024-07-13 06:22:06.411316] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.117 [2024-07-13 06:22:06.411330] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.117 [2024-07-13 06:22:06.411357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.117 qpair failed and we were unable to recover it. 00:27:00.117 [2024-07-13 06:22:06.421131] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.117 [2024-07-13 06:22:06.421271] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.117 [2024-07-13 06:22:06.421297] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.117 [2024-07-13 06:22:06.421312] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.117 [2024-07-13 06:22:06.421325] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.117 [2024-07-13 06:22:06.421353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.117 qpair failed and we were unable to recover it. 00:27:00.117 [2024-07-13 06:22:06.431211] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.117 [2024-07-13 06:22:06.431333] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.117 [2024-07-13 06:22:06.431358] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.117 [2024-07-13 06:22:06.431373] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.117 [2024-07-13 06:22:06.431387] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.117 [2024-07-13 06:22:06.431414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.117 qpair failed and we were unable to recover it. 
00:27:00.117 [2024-07-13 06:22:06.441197] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.117 [2024-07-13 06:22:06.441335] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.117 [2024-07-13 06:22:06.441362] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.117 [2024-07-13 06:22:06.441377] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.117 [2024-07-13 06:22:06.441394] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.117 [2024-07-13 06:22:06.441424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.117 qpair failed and we were unable to recover it. 00:27:00.117 [2024-07-13 06:22:06.451214] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.117 [2024-07-13 06:22:06.451378] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.117 [2024-07-13 06:22:06.451409] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.117 [2024-07-13 06:22:06.451424] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.117 [2024-07-13 06:22:06.451437] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.117 [2024-07-13 06:22:06.451466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.117 qpair failed and we were unable to recover it. 00:27:00.117 [2024-07-13 06:22:06.461212] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.117 [2024-07-13 06:22:06.461331] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.117 [2024-07-13 06:22:06.461356] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.117 [2024-07-13 06:22:06.461371] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.117 [2024-07-13 06:22:06.461384] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.117 [2024-07-13 06:22:06.461412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.117 qpair failed and we were unable to recover it. 
00:27:00.117 [2024-07-13 06:22:06.471257] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.117 [2024-07-13 06:22:06.471383] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.117 [2024-07-13 06:22:06.471409] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.117 [2024-07-13 06:22:06.471427] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.117 [2024-07-13 06:22:06.471442] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.118 [2024-07-13 06:22:06.471472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.118 qpair failed and we were unable to recover it. 00:27:00.118 [2024-07-13 06:22:06.481307] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.118 [2024-07-13 06:22:06.481435] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.118 [2024-07-13 06:22:06.481461] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.118 [2024-07-13 06:22:06.481476] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.118 [2024-07-13 06:22:06.481489] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.118 [2024-07-13 06:22:06.481516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.118 qpair failed and we were unable to recover it. 00:27:00.118 [2024-07-13 06:22:06.491383] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.118 [2024-07-13 06:22:06.491498] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.118 [2024-07-13 06:22:06.491522] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.118 [2024-07-13 06:22:06.491537] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.118 [2024-07-13 06:22:06.491556] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.118 [2024-07-13 06:22:06.491584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.118 qpair failed and we were unable to recover it. 
00:27:00.118 [2024-07-13 06:22:06.501415] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.118 [2024-07-13 06:22:06.501560] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.118 [2024-07-13 06:22:06.501585] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.118 [2024-07-13 06:22:06.501600] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.118 [2024-07-13 06:22:06.501614] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.118 [2024-07-13 06:22:06.501642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.118 qpair failed and we were unable to recover it. 00:27:00.118 [2024-07-13 06:22:06.511347] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.118 [2024-07-13 06:22:06.511468] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.118 [2024-07-13 06:22:06.511492] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.118 [2024-07-13 06:22:06.511507] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.118 [2024-07-13 06:22:06.511520] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.118 [2024-07-13 06:22:06.511548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.118 qpair failed and we were unable to recover it. 00:27:00.118 [2024-07-13 06:22:06.521384] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.118 [2024-07-13 06:22:06.521504] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.118 [2024-07-13 06:22:06.521529] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.118 [2024-07-13 06:22:06.521544] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.118 [2024-07-13 06:22:06.521557] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.118 [2024-07-13 06:22:06.521585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.118 qpair failed and we were unable to recover it. 
00:27:00.118 [2024-07-13 06:22:06.531423] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.118 [2024-07-13 06:22:06.531545] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.118 [2024-07-13 06:22:06.531569] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.118 [2024-07-13 06:22:06.531584] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.118 [2024-07-13 06:22:06.531597] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.118 [2024-07-13 06:22:06.531625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.118 qpair failed and we were unable to recover it. 00:27:00.118 [2024-07-13 06:22:06.541532] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.118 [2024-07-13 06:22:06.541657] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.118 [2024-07-13 06:22:06.541682] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.118 [2024-07-13 06:22:06.541697] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.118 [2024-07-13 06:22:06.541711] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.118 [2024-07-13 06:22:06.541739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.118 qpair failed and we were unable to recover it. 00:27:00.118 [2024-07-13 06:22:06.551476] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.118 [2024-07-13 06:22:06.551609] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.118 [2024-07-13 06:22:06.551634] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.118 [2024-07-13 06:22:06.551648] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.118 [2024-07-13 06:22:06.551661] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.118 [2024-07-13 06:22:06.551689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.118 qpair failed and we were unable to recover it. 
00:27:00.118 [2024-07-13 06:22:06.561495] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.118 [2024-07-13 06:22:06.561654] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.118 [2024-07-13 06:22:06.561679] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.118 [2024-07-13 06:22:06.561694] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.118 [2024-07-13 06:22:06.561708] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.118 [2024-07-13 06:22:06.561735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.118 qpair failed and we were unable to recover it. 00:27:00.118 [2024-07-13 06:22:06.571556] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.118 [2024-07-13 06:22:06.571675] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.118 [2024-07-13 06:22:06.571700] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.118 [2024-07-13 06:22:06.571715] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.118 [2024-07-13 06:22:06.571728] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.118 [2024-07-13 06:22:06.571757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.118 qpair failed and we were unable to recover it. 00:27:00.118 [2024-07-13 06:22:06.581572] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.118 [2024-07-13 06:22:06.581710] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.118 [2024-07-13 06:22:06.581738] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.118 [2024-07-13 06:22:06.581756] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.118 [2024-07-13 06:22:06.581776] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.118 [2024-07-13 06:22:06.581806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.118 qpair failed and we were unable to recover it. 
00:27:00.118 [2024-07-13 06:22:06.591593] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.118 [2024-07-13 06:22:06.591723] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.118 [2024-07-13 06:22:06.591749] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.118 [2024-07-13 06:22:06.591764] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.118 [2024-07-13 06:22:06.591778] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.118 [2024-07-13 06:22:06.591805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.118 qpair failed and we were unable to recover it. 00:27:00.118 [2024-07-13 06:22:06.601615] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.118 [2024-07-13 06:22:06.601737] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.118 [2024-07-13 06:22:06.601763] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.118 [2024-07-13 06:22:06.601777] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.118 [2024-07-13 06:22:06.601791] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.118 [2024-07-13 06:22:06.601818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.118 qpair failed and we were unable to recover it. 00:27:00.118 [2024-07-13 06:22:06.611637] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.118 [2024-07-13 06:22:06.611768] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.118 [2024-07-13 06:22:06.611793] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.118 [2024-07-13 06:22:06.611808] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.118 [2024-07-13 06:22:06.611821] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.118 [2024-07-13 06:22:06.611849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.118 qpair failed and we were unable to recover it. 
00:27:00.118 [2024-07-13 06:22:06.621684] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.119 [2024-07-13 06:22:06.621804] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.119 [2024-07-13 06:22:06.621830] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.119 [2024-07-13 06:22:06.621844] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.119 [2024-07-13 06:22:06.621857] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.119 [2024-07-13 06:22:06.621891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.119 qpair failed and we were unable to recover it. 00:27:00.378 [2024-07-13 06:22:06.631720] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.378 [2024-07-13 06:22:06.631899] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.378 [2024-07-13 06:22:06.631932] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.378 [2024-07-13 06:22:06.631959] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.378 [2024-07-13 06:22:06.631974] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.378 [2024-07-13 06:22:06.632005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.378 qpair failed and we were unable to recover it. 00:27:00.378 [2024-07-13 06:22:06.641738] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.378 [2024-07-13 06:22:06.641861] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.378 [2024-07-13 06:22:06.641894] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.378 [2024-07-13 06:22:06.641909] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.378 [2024-07-13 06:22:06.641922] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.378 [2024-07-13 06:22:06.641951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.378 qpair failed and we were unable to recover it. 
00:27:00.378 [2024-07-13 06:22:06.651798] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.378 [2024-07-13 06:22:06.651960] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.378 [2024-07-13 06:22:06.651987] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.378 [2024-07-13 06:22:06.652002] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.378 [2024-07-13 06:22:06.652015] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.378 [2024-07-13 06:22:06.652043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.378 qpair failed and we were unable to recover it. 00:27:00.378 [2024-07-13 06:22:06.661837] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.378 [2024-07-13 06:22:06.661974] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.378 [2024-07-13 06:22:06.661999] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.378 [2024-07-13 06:22:06.662014] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.378 [2024-07-13 06:22:06.662028] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.378 [2024-07-13 06:22:06.662056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.378 qpair failed and we were unable to recover it. 00:27:00.378 [2024-07-13 06:22:06.671846] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.378 [2024-07-13 06:22:06.671996] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.378 [2024-07-13 06:22:06.672023] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.378 [2024-07-13 06:22:06.672037] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.378 [2024-07-13 06:22:06.672056] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.378 [2024-07-13 06:22:06.672085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.378 qpair failed and we were unable to recover it. 
00:27:00.378 [2024-07-13 06:22:06.681848] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.378 [2024-07-13 06:22:06.681977] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.378 [2024-07-13 06:22:06.682002] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.378 [2024-07-13 06:22:06.682017] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.378 [2024-07-13 06:22:06.682030] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.378 [2024-07-13 06:22:06.682058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.378 qpair failed and we were unable to recover it. 00:27:00.378 [2024-07-13 06:22:06.691939] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.378 [2024-07-13 06:22:06.692062] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.379 [2024-07-13 06:22:06.692088] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.379 [2024-07-13 06:22:06.692102] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.379 [2024-07-13 06:22:06.692115] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.379 [2024-07-13 06:22:06.692145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.379 qpair failed and we were unable to recover it. 00:27:00.379 [2024-07-13 06:22:06.701951] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.379 [2024-07-13 06:22:06.702116] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.379 [2024-07-13 06:22:06.702141] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.379 [2024-07-13 06:22:06.702156] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.379 [2024-07-13 06:22:06.702169] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.379 [2024-07-13 06:22:06.702197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.379 qpair failed and we were unable to recover it. 
00:27:00.379 [2024-07-13 06:22:06.711946] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.379 [2024-07-13 06:22:06.712068] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.379 [2024-07-13 06:22:06.712094] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.379 [2024-07-13 06:22:06.712108] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.379 [2024-07-13 06:22:06.712122] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.379 [2024-07-13 06:22:06.712150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.379 qpair failed and we were unable to recover it. 00:27:00.379 [2024-07-13 06:22:06.722004] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.379 [2024-07-13 06:22:06.722147] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.379 [2024-07-13 06:22:06.722173] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.379 [2024-07-13 06:22:06.722188] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.379 [2024-07-13 06:22:06.722201] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.379 [2024-07-13 06:22:06.722229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.379 qpair failed and we were unable to recover it. 00:27:00.379 [2024-07-13 06:22:06.732023] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.379 [2024-07-13 06:22:06.732145] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.379 [2024-07-13 06:22:06.732170] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.379 [2024-07-13 06:22:06.732185] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.379 [2024-07-13 06:22:06.732198] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.379 [2024-07-13 06:22:06.732226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.379 qpair failed and we were unable to recover it. 
00:27:00.379 [2024-07-13 06:22:06.742093] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.379 [2024-07-13 06:22:06.742215] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.379 [2024-07-13 06:22:06.742240] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.379 [2024-07-13 06:22:06.742255] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.379 [2024-07-13 06:22:06.742269] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.379 [2024-07-13 06:22:06.742297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.379 qpair failed and we were unable to recover it. 00:27:00.379 [2024-07-13 06:22:06.752090] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.379 [2024-07-13 06:22:06.752264] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.379 [2024-07-13 06:22:06.752290] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.379 [2024-07-13 06:22:06.752304] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.379 [2024-07-13 06:22:06.752317] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.379 [2024-07-13 06:22:06.752345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.379 qpair failed and we were unable to recover it. 00:27:00.379 [2024-07-13 06:22:06.762081] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.379 [2024-07-13 06:22:06.762197] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.379 [2024-07-13 06:22:06.762221] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.379 [2024-07-13 06:22:06.762236] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.379 [2024-07-13 06:22:06.762255] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.379 [2024-07-13 06:22:06.762283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.379 qpair failed and we were unable to recover it. 
00:27:00.379 [2024-07-13 06:22:06.772136] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.379 [2024-07-13 06:22:06.772270] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.379 [2024-07-13 06:22:06.772296] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.379 [2024-07-13 06:22:06.772311] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.379 [2024-07-13 06:22:06.772324] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.379 [2024-07-13 06:22:06.772351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.379 qpair failed and we were unable to recover it. 00:27:00.379 [2024-07-13 06:22:06.782230] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.379 [2024-07-13 06:22:06.782378] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.379 [2024-07-13 06:22:06.782404] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.379 [2024-07-13 06:22:06.782419] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.379 [2024-07-13 06:22:06.782432] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.379 [2024-07-13 06:22:06.782460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.379 qpair failed and we were unable to recover it. 00:27:00.379 [2024-07-13 06:22:06.792189] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.379 [2024-07-13 06:22:06.792311] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.379 [2024-07-13 06:22:06.792336] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.379 [2024-07-13 06:22:06.792351] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.379 [2024-07-13 06:22:06.792364] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.379 [2024-07-13 06:22:06.792394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.379 qpair failed and we were unable to recover it. 
00:27:00.379 [2024-07-13 06:22:06.802227] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.379 [2024-07-13 06:22:06.802347] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.379 [2024-07-13 06:22:06.802372] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.379 [2024-07-13 06:22:06.802387] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.379 [2024-07-13 06:22:06.802400] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.379 [2024-07-13 06:22:06.802428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.379 qpair failed and we were unable to recover it. 00:27:00.379 [2024-07-13 06:22:06.812336] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.379 [2024-07-13 06:22:06.812477] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.379 [2024-07-13 06:22:06.812503] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.379 [2024-07-13 06:22:06.812517] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.379 [2024-07-13 06:22:06.812530] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.379 [2024-07-13 06:22:06.812558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.379 qpair failed and we were unable to recover it. 00:27:00.379 [2024-07-13 06:22:06.822280] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.379 [2024-07-13 06:22:06.822402] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.379 [2024-07-13 06:22:06.822427] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.379 [2024-07-13 06:22:06.822442] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.379 [2024-07-13 06:22:06.822455] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.379 [2024-07-13 06:22:06.822483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.379 qpair failed and we were unable to recover it. 
00:27:00.379 [2024-07-13 06:22:06.832308] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.379 [2024-07-13 06:22:06.832432] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.379 [2024-07-13 06:22:06.832458] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.379 [2024-07-13 06:22:06.832473] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.380 [2024-07-13 06:22:06.832486] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.380 [2024-07-13 06:22:06.832514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.380 qpair failed and we were unable to recover it. 00:27:00.380 [2024-07-13 06:22:06.842297] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.380 [2024-07-13 06:22:06.842443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.380 [2024-07-13 06:22:06.842468] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.380 [2024-07-13 06:22:06.842483] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.380 [2024-07-13 06:22:06.842496] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.380 [2024-07-13 06:22:06.842523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.380 qpair failed and we were unable to recover it. 00:27:00.380 [2024-07-13 06:22:06.852413] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.380 [2024-07-13 06:22:06.852533] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.380 [2024-07-13 06:22:06.852558] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.380 [2024-07-13 06:22:06.852578] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.380 [2024-07-13 06:22:06.852592] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.380 [2024-07-13 06:22:06.852622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.380 qpair failed and we were unable to recover it. 
00:27:00.380 [2024-07-13 06:22:06.862425] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.380 [2024-07-13 06:22:06.862546] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.380 [2024-07-13 06:22:06.862572] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.380 [2024-07-13 06:22:06.862586] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.380 [2024-07-13 06:22:06.862600] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.380 [2024-07-13 06:22:06.862628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.380 qpair failed and we were unable to recover it. 00:27:00.380 [2024-07-13 06:22:06.872423] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.380 [2024-07-13 06:22:06.872537] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.380 [2024-07-13 06:22:06.872562] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.380 [2024-07-13 06:22:06.872577] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.380 [2024-07-13 06:22:06.872590] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.380 [2024-07-13 06:22:06.872618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.380 qpair failed and we were unable to recover it. 00:27:00.380 [2024-07-13 06:22:06.882526] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.380 [2024-07-13 06:22:06.882651] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.380 [2024-07-13 06:22:06.882676] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.380 [2024-07-13 06:22:06.882690] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.380 [2024-07-13 06:22:06.882704] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.380 [2024-07-13 06:22:06.882731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.380 qpair failed and we were unable to recover it. 
00:27:00.639 [2024-07-13 06:22:06.892481] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.639 [2024-07-13 06:22:06.892609] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.639 [2024-07-13 06:22:06.892636] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.639 [2024-07-13 06:22:06.892652] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.639 [2024-07-13 06:22:06.892665] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.639 [2024-07-13 06:22:06.892699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.639 qpair failed and we were unable to recover it. 00:27:00.639 [2024-07-13 06:22:06.902483] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.639 [2024-07-13 06:22:06.902609] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.639 [2024-07-13 06:22:06.902636] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.639 [2024-07-13 06:22:06.902650] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.639 [2024-07-13 06:22:06.902664] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.639 [2024-07-13 06:22:06.902693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.639 qpair failed and we were unable to recover it. 00:27:00.639 [2024-07-13 06:22:06.912517] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.639 [2024-07-13 06:22:06.912645] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.639 [2024-07-13 06:22:06.912670] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.639 [2024-07-13 06:22:06.912685] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.639 [2024-07-13 06:22:06.912698] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.639 [2024-07-13 06:22:06.912726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.639 qpair failed and we were unable to recover it. 
00:27:00.639 [2024-07-13 06:22:06.922576] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.639 [2024-07-13 06:22:06.922699] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.639 [2024-07-13 06:22:06.922725] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.639 [2024-07-13 06:22:06.922740] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.639 [2024-07-13 06:22:06.922753] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.639 [2024-07-13 06:22:06.922781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.639 qpair failed and we were unable to recover it. 00:27:00.639 [2024-07-13 06:22:06.932605] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.639 [2024-07-13 06:22:06.932728] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.639 [2024-07-13 06:22:06.932754] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.639 [2024-07-13 06:22:06.932769] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.639 [2024-07-13 06:22:06.932782] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.639 [2024-07-13 06:22:06.932810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.639 qpair failed and we were unable to recover it. 00:27:00.639 [2024-07-13 06:22:06.942661] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.639 [2024-07-13 06:22:06.942783] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.639 [2024-07-13 06:22:06.942808] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.639 [2024-07-13 06:22:06.942828] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.639 [2024-07-13 06:22:06.942842] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.639 [2024-07-13 06:22:06.942876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.639 qpair failed and we were unable to recover it. 
00:27:00.639 [2024-07-13 06:22:06.952641] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.639 [2024-07-13 06:22:06.952809] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.639 [2024-07-13 06:22:06.952835] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.639 [2024-07-13 06:22:06.952850] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.639 [2024-07-13 06:22:06.952863] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.639 [2024-07-13 06:22:06.952902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.639 qpair failed and we were unable to recover it. 00:27:00.639 [2024-07-13 06:22:06.962672] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.640 [2024-07-13 06:22:06.962799] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.640 [2024-07-13 06:22:06.962824] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.640 [2024-07-13 06:22:06.962839] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.640 [2024-07-13 06:22:06.962852] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.640 [2024-07-13 06:22:06.962889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.640 qpair failed and we were unable to recover it. 00:27:00.640 [2024-07-13 06:22:06.972719] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.640 [2024-07-13 06:22:06.972855] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.640 [2024-07-13 06:22:06.972886] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.640 [2024-07-13 06:22:06.972901] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.640 [2024-07-13 06:22:06.972915] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.640 [2024-07-13 06:22:06.972943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.640 qpair failed and we were unable to recover it. 
00:27:00.640 [2024-07-13 06:22:06.982781] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.640 [2024-07-13 06:22:06.982946] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.640 [2024-07-13 06:22:06.982971] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.640 [2024-07-13 06:22:06.982986] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.640 [2024-07-13 06:22:06.982999] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.640 [2024-07-13 06:22:06.983027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.640 qpair failed and we were unable to recover it. 00:27:00.640 [2024-07-13 06:22:06.992803] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.640 [2024-07-13 06:22:06.992937] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.640 [2024-07-13 06:22:06.992963] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.640 [2024-07-13 06:22:06.992977] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.640 [2024-07-13 06:22:06.992991] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.640 [2024-07-13 06:22:06.993019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.640 qpair failed and we were unable to recover it. 00:27:00.640 [2024-07-13 06:22:07.002772] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.640 [2024-07-13 06:22:07.002902] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.640 [2024-07-13 06:22:07.002928] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.640 [2024-07-13 06:22:07.002943] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.640 [2024-07-13 06:22:07.002956] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.640 [2024-07-13 06:22:07.002985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.640 qpair failed and we were unable to recover it. 
00:27:00.640 [2024-07-13 06:22:07.012800] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.640 [2024-07-13 06:22:07.012922] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.640 [2024-07-13 06:22:07.012948] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.640 [2024-07-13 06:22:07.012962] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.640 [2024-07-13 06:22:07.012976] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.640 [2024-07-13 06:22:07.013004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.640 qpair failed and we were unable to recover it. 00:27:00.640 [2024-07-13 06:22:07.022838] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.640 [2024-07-13 06:22:07.023014] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.640 [2024-07-13 06:22:07.023040] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.640 [2024-07-13 06:22:07.023055] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.640 [2024-07-13 06:22:07.023069] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.640 [2024-07-13 06:22:07.023097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.640 qpair failed and we were unable to recover it. 00:27:00.640 [2024-07-13 06:22:07.032860] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.640 [2024-07-13 06:22:07.032986] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.640 [2024-07-13 06:22:07.033010] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.640 [2024-07-13 06:22:07.033031] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.640 [2024-07-13 06:22:07.033045] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.640 [2024-07-13 06:22:07.033073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.640 qpair failed and we were unable to recover it. 
00:27:00.640 [2024-07-13 06:22:07.042920] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.640 [2024-07-13 06:22:07.043037] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.640 [2024-07-13 06:22:07.043063] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.640 [2024-07-13 06:22:07.043078] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.640 [2024-07-13 06:22:07.043091] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.640 [2024-07-13 06:22:07.043119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.640 qpair failed and we were unable to recover it. 00:27:00.640 [2024-07-13 06:22:07.052966] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.640 [2024-07-13 06:22:07.053088] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.640 [2024-07-13 06:22:07.053113] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.640 [2024-07-13 06:22:07.053128] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.640 [2024-07-13 06:22:07.053141] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.640 [2024-07-13 06:22:07.053169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.640 qpair failed and we were unable to recover it. 00:27:00.640 [2024-07-13 06:22:07.062990] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.640 [2024-07-13 06:22:07.063108] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.640 [2024-07-13 06:22:07.063136] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.640 [2024-07-13 06:22:07.063150] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.640 [2024-07-13 06:22:07.063164] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.640 [2024-07-13 06:22:07.063192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.640 qpair failed and we were unable to recover it. 
00:27:00.640 [2024-07-13 06:22:07.072993] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.640 [2024-07-13 06:22:07.073138] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.640 [2024-07-13 06:22:07.073163] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.640 [2024-07-13 06:22:07.073178] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.640 [2024-07-13 06:22:07.073191] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.640 [2024-07-13 06:22:07.073219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.640 qpair failed and we were unable to recover it. 00:27:00.640 [2024-07-13 06:22:07.083050] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.640 [2024-07-13 06:22:07.083167] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.640 [2024-07-13 06:22:07.083193] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.640 [2024-07-13 06:22:07.083208] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.640 [2024-07-13 06:22:07.083222] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.640 [2024-07-13 06:22:07.083250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.640 qpair failed and we were unable to recover it. 00:27:00.640 [2024-07-13 06:22:07.093068] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.640 [2024-07-13 06:22:07.093192] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.640 [2024-07-13 06:22:07.093217] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.640 [2024-07-13 06:22:07.093232] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.640 [2024-07-13 06:22:07.093245] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.640 [2024-07-13 06:22:07.093274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.640 qpair failed and we were unable to recover it. 
00:27:00.640 [2024-07-13 06:22:07.103128] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.640 [2024-07-13 06:22:07.103245] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.641 [2024-07-13 06:22:07.103271] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.641 [2024-07-13 06:22:07.103286] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.641 [2024-07-13 06:22:07.103299] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.641 [2024-07-13 06:22:07.103327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.641 qpair failed and we were unable to recover it. 00:27:00.641 [2024-07-13 06:22:07.113116] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.641 [2024-07-13 06:22:07.113245] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.641 [2024-07-13 06:22:07.113271] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.641 [2024-07-13 06:22:07.113286] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.641 [2024-07-13 06:22:07.113299] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.641 [2024-07-13 06:22:07.113327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.641 qpair failed and we were unable to recover it. 00:27:00.641 [2024-07-13 06:22:07.123176] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.641 [2024-07-13 06:22:07.123297] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.641 [2024-07-13 06:22:07.123323] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.641 [2024-07-13 06:22:07.123347] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.641 [2024-07-13 06:22:07.123363] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.641 [2024-07-13 06:22:07.123394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.641 qpair failed and we were unable to recover it. 
00:27:00.641 [2024-07-13 06:22:07.133193] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.641 [2024-07-13 06:22:07.133315] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.641 [2024-07-13 06:22:07.133341] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.641 [2024-07-13 06:22:07.133356] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.641 [2024-07-13 06:22:07.133369] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.641 [2024-07-13 06:22:07.133397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.641 qpair failed and we were unable to recover it. 00:27:00.641 [2024-07-13 06:22:07.143204] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.641 [2024-07-13 06:22:07.143329] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.641 [2024-07-13 06:22:07.143354] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.641 [2024-07-13 06:22:07.143370] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.641 [2024-07-13 06:22:07.143383] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.641 [2024-07-13 06:22:07.143411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.641 qpair failed and we were unable to recover it. 00:27:00.900 [2024-07-13 06:22:07.153241] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.900 [2024-07-13 06:22:07.153370] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.900 [2024-07-13 06:22:07.153406] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.900 [2024-07-13 06:22:07.153426] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.900 [2024-07-13 06:22:07.153440] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.900 [2024-07-13 06:22:07.153470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.900 qpair failed and we were unable to recover it. 
00:27:00.900 [2024-07-13 06:22:07.163272] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.900 [2024-07-13 06:22:07.163402] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.900 [2024-07-13 06:22:07.163430] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.900 [2024-07-13 06:22:07.163445] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.900 [2024-07-13 06:22:07.163458] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.900 [2024-07-13 06:22:07.163487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.900 qpair failed and we were unable to recover it. 00:27:00.900 [2024-07-13 06:22:07.173261] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.900 [2024-07-13 06:22:07.173375] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.900 [2024-07-13 06:22:07.173400] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.900 [2024-07-13 06:22:07.173415] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.900 [2024-07-13 06:22:07.173428] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.900 [2024-07-13 06:22:07.173456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.900 qpair failed and we were unable to recover it. 00:27:00.900 [2024-07-13 06:22:07.183340] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.900 [2024-07-13 06:22:07.183464] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.900 [2024-07-13 06:22:07.183489] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.900 [2024-07-13 06:22:07.183504] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.900 [2024-07-13 06:22:07.183518] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.900 [2024-07-13 06:22:07.183546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.900 qpair failed and we were unable to recover it. 
00:27:00.900 [2024-07-13 06:22:07.193381] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.900 [2024-07-13 06:22:07.193503] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.900 [2024-07-13 06:22:07.193528] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.900 [2024-07-13 06:22:07.193542] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.900 [2024-07-13 06:22:07.193556] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.900 [2024-07-13 06:22:07.193584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.900 qpair failed and we were unable to recover it. 00:27:00.900 [2024-07-13 06:22:07.203371] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.900 [2024-07-13 06:22:07.203485] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.900 [2024-07-13 06:22:07.203511] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.900 [2024-07-13 06:22:07.203525] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.900 [2024-07-13 06:22:07.203539] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.900 [2024-07-13 06:22:07.203567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.900 qpair failed and we were unable to recover it. 00:27:00.900 [2024-07-13 06:22:07.213485] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.900 [2024-07-13 06:22:07.213604] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.900 [2024-07-13 06:22:07.213634] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.900 [2024-07-13 06:22:07.213650] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.900 [2024-07-13 06:22:07.213663] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.900 [2024-07-13 06:22:07.213691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.900 qpair failed and we were unable to recover it. 
00:27:00.900 [2024-07-13 06:22:07.223436] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.900 [2024-07-13 06:22:07.223567] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.900 [2024-07-13 06:22:07.223592] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.900 [2024-07-13 06:22:07.223608] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.900 [2024-07-13 06:22:07.223621] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.900 [2024-07-13 06:22:07.223648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.900 qpair failed and we were unable to recover it. 00:27:00.900 [2024-07-13 06:22:07.233521] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.900 [2024-07-13 06:22:07.233668] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.900 [2024-07-13 06:22:07.233692] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.900 [2024-07-13 06:22:07.233706] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.900 [2024-07-13 06:22:07.233718] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.900 [2024-07-13 06:22:07.233746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.900 qpair failed and we were unable to recover it. 00:27:00.900 [2024-07-13 06:22:07.243498] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.900 [2024-07-13 06:22:07.243616] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.900 [2024-07-13 06:22:07.243642] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.900 [2024-07-13 06:22:07.243657] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.900 [2024-07-13 06:22:07.243670] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.900 [2024-07-13 06:22:07.243698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.900 qpair failed and we were unable to recover it. 
00:27:00.900 [2024-07-13 06:22:07.253523] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.900 [2024-07-13 06:22:07.253643] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.900 [2024-07-13 06:22:07.253668] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.900 [2024-07-13 06:22:07.253683] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.900 [2024-07-13 06:22:07.253696] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.900 [2024-07-13 06:22:07.253724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.900 qpair failed and we were unable to recover it. 00:27:00.900 [2024-07-13 06:22:07.263643] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.900 [2024-07-13 06:22:07.263782] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.900 [2024-07-13 06:22:07.263809] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.900 [2024-07-13 06:22:07.263825] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.900 [2024-07-13 06:22:07.263838] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.900 [2024-07-13 06:22:07.263874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.900 qpair failed and we were unable to recover it. 00:27:00.900 [2024-07-13 06:22:07.273605] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.900 [2024-07-13 06:22:07.273727] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.900 [2024-07-13 06:22:07.273752] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.900 [2024-07-13 06:22:07.273767] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.900 [2024-07-13 06:22:07.273781] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.900 [2024-07-13 06:22:07.273809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.900 qpair failed and we were unable to recover it. 
00:27:00.900 [2024-07-13 06:22:07.283619] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.901 [2024-07-13 06:22:07.283739] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.901 [2024-07-13 06:22:07.283764] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.901 [2024-07-13 06:22:07.283779] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.901 [2024-07-13 06:22:07.283792] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.901 [2024-07-13 06:22:07.283820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.901 qpair failed and we were unable to recover it. 00:27:00.901 [2024-07-13 06:22:07.293638] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.901 [2024-07-13 06:22:07.293750] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.901 [2024-07-13 06:22:07.293775] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.901 [2024-07-13 06:22:07.293790] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.901 [2024-07-13 06:22:07.293803] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.901 [2024-07-13 06:22:07.293831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.901 qpair failed and we were unable to recover it. 00:27:00.901 [2024-07-13 06:22:07.303684] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.901 [2024-07-13 06:22:07.303803] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.901 [2024-07-13 06:22:07.303833] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.901 [2024-07-13 06:22:07.303848] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.901 [2024-07-13 06:22:07.303861] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.901 [2024-07-13 06:22:07.303899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.901 qpair failed and we were unable to recover it. 
00:27:00.901 [2024-07-13 06:22:07.313696] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.901 [2024-07-13 06:22:07.313827] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.901 [2024-07-13 06:22:07.313852] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.901 [2024-07-13 06:22:07.313873] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.901 [2024-07-13 06:22:07.313889] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.901 [2024-07-13 06:22:07.313917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.901 qpair failed and we were unable to recover it. 00:27:00.901 [2024-07-13 06:22:07.323719] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.901 [2024-07-13 06:22:07.323833] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.901 [2024-07-13 06:22:07.323858] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.901 [2024-07-13 06:22:07.323882] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.901 [2024-07-13 06:22:07.323896] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.901 [2024-07-13 06:22:07.323931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.901 qpair failed and we were unable to recover it. 00:27:00.901 [2024-07-13 06:22:07.333768] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.901 [2024-07-13 06:22:07.333892] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.901 [2024-07-13 06:22:07.333917] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.901 [2024-07-13 06:22:07.333932] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.901 [2024-07-13 06:22:07.333945] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.901 [2024-07-13 06:22:07.333973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.901 qpair failed and we were unable to recover it. 
00:27:00.901 [2024-07-13 06:22:07.343808] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.901 [2024-07-13 06:22:07.343946] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.901 [2024-07-13 06:22:07.343971] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.901 [2024-07-13 06:22:07.343986] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.901 [2024-07-13 06:22:07.343999] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.901 [2024-07-13 06:22:07.344033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.901 qpair failed and we were unable to recover it. 00:27:00.901 [2024-07-13 06:22:07.353831] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.901 [2024-07-13 06:22:07.353959] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.901 [2024-07-13 06:22:07.353985] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.901 [2024-07-13 06:22:07.354000] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.901 [2024-07-13 06:22:07.354013] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.901 [2024-07-13 06:22:07.354041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.901 qpair failed and we were unable to recover it. 00:27:00.901 [2024-07-13 06:22:07.363846] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.901 [2024-07-13 06:22:07.363984] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.901 [2024-07-13 06:22:07.364010] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.901 [2024-07-13 06:22:07.364025] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.901 [2024-07-13 06:22:07.364038] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.901 [2024-07-13 06:22:07.364065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.901 qpair failed and we were unable to recover it. 
00:27:00.901 [2024-07-13 06:22:07.373857] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.901 [2024-07-13 06:22:07.374000] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.901 [2024-07-13 06:22:07.374025] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.901 [2024-07-13 06:22:07.374040] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.901 [2024-07-13 06:22:07.374053] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.901 [2024-07-13 06:22:07.374080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.901 qpair failed and we were unable to recover it. 00:27:00.901 [2024-07-13 06:22:07.383931] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.901 [2024-07-13 06:22:07.384053] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.901 [2024-07-13 06:22:07.384078] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.901 [2024-07-13 06:22:07.384092] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.901 [2024-07-13 06:22:07.384106] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x151d9f0 00:27:00.901 [2024-07-13 06:22:07.384133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:00.901 qpair failed and we were unable to recover it. 00:27:00.901 [2024-07-13 06:22:07.393949] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.901 [2024-07-13 06:22:07.394071] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.901 [2024-07-13 06:22:07.394110] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.901 [2024-07-13 06:22:07.394128] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.901 [2024-07-13 06:22:07.394143] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:27:00.901 [2024-07-13 06:22:07.394176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.901 qpair failed and we were unable to recover it. 
00:27:00.901 [2024-07-13 06:22:07.403966] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:00.901 [2024-07-13 06:22:07.404092] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:00.901 [2024-07-13 06:22:07.404119] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:00.901 [2024-07-13 06:22:07.404135] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:00.901 [2024-07-13 06:22:07.404148] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d8c000b90 00:27:00.901 [2024-07-13 06:22:07.404178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:00.901 qpair failed and we were unable to recover it. 00:27:01.159 [2024-07-13 06:22:07.414044] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.159 [2024-07-13 06:22:07.414197] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.159 [2024-07-13 06:22:07.414230] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.159 [2024-07-13 06:22:07.414249] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.159 [2024-07-13 06:22:07.414263] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d94000b90 00:27:01.160 [2024-07-13 06:22:07.414301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.160 qpair failed and we were unable to recover it. 00:27:01.160 [2024-07-13 06:22:07.424060] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.160 [2024-07-13 06:22:07.424180] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.160 [2024-07-13 06:22:07.424208] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.160 [2024-07-13 06:22:07.424224] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.160 [2024-07-13 06:22:07.424238] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d94000b90 00:27:01.160 [2024-07-13 06:22:07.424268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.160 qpair failed and we were unable to recover it. 
00:27:01.160 [2024-07-13 06:22:07.424507] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x152b4b0 is same with the state(5) to be set 00:27:01.160 [2024-07-13 06:22:07.434098] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.160 [2024-07-13 06:22:07.434223] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.160 [2024-07-13 06:22:07.434256] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.160 [2024-07-13 06:22:07.434277] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.160 [2024-07-13 06:22:07.434297] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d84000b90 00:27:01.160 [2024-07-13 06:22:07.434332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:01.160 qpair failed and we were unable to recover it. 00:27:01.160 [2024-07-13 06:22:07.444130] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:01.160 [2024-07-13 06:22:07.444253] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:01.160 [2024-07-13 06:22:07.444281] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:01.160 [2024-07-13 06:22:07.444296] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:01.160 [2024-07-13 06:22:07.444310] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8d84000b90 00:27:01.160 [2024-07-13 06:22:07.444341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:01.160 qpair failed and we were unable to recover it. 00:27:01.160 [2024-07-13 06:22:07.444629] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x152b4b0 (9): Bad file descriptor 00:27:01.160 Initializing NVMe Controllers 00:27:01.160 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:01.160 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:01.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:01.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:01.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:01.160 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:01.160 Initialization complete. Launching workers. 
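The long run of near-identical errors above comes from the deliberate-disconnect case exercised here (nvmf_target_disconnect_tc2, whose END banner appears just below): the target repeatedly reports "Unknown controller ID 0x1" and rejects the host's I/O-queue CONNECT (sct 1, sc 130), and the host records each qpair as failed and unrecoverable before moving on. A quick way to summarize those records from a saved copy of this console output is sketched below; the file name console.log and the helper itself are illustrative only, not part of the test suite.

  # Sketch: count the injected CONNECT failures in a saved copy of this
  # console output (the file name is hypothetical).
  grep -c 'qpair failed and we were unable to recover it' console.log
  # Break the failures down by the host-side tqpair that hit them.
  grep -o 'Failed to connect tqpair=0x[0-9a-f]*' console.log | sort | uniq -c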
00:27:01.160 Starting thread on core 1 00:27:01.160 Starting thread on core 2 00:27:01.160 Starting thread on core 3 00:27:01.160 Starting thread on core 0 00:27:01.160 06:22:07 -- host/target_disconnect.sh@59 -- # sync 00:27:01.160 00:27:01.160 real 0m11.384s 00:27:01.160 user 0m20.376s 00:27:01.160 sys 0m5.271s 00:27:01.160 06:22:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:01.160 06:22:07 -- common/autotest_common.sh@10 -- # set +x 00:27:01.160 ************************************ 00:27:01.160 END TEST nvmf_target_disconnect_tc2 00:27:01.160 ************************************ 00:27:01.160 06:22:07 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:27:01.160 06:22:07 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:27:01.160 06:22:07 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:27:01.160 06:22:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:01.160 06:22:07 -- nvmf/common.sh@116 -- # sync 00:27:01.160 06:22:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:01.160 06:22:07 -- nvmf/common.sh@119 -- # set +e 00:27:01.160 06:22:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:01.160 06:22:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:01.160 rmmod nvme_tcp 00:27:01.160 rmmod nvme_fabrics 00:27:01.160 rmmod nvme_keyring 00:27:01.160 06:22:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:01.160 06:22:07 -- nvmf/common.sh@123 -- # set -e 00:27:01.160 06:22:07 -- nvmf/common.sh@124 -- # return 0 00:27:01.160 06:22:07 -- nvmf/common.sh@477 -- # '[' -n 1236666 ']' 00:27:01.160 06:22:07 -- nvmf/common.sh@478 -- # killprocess 1236666 00:27:01.160 06:22:07 -- common/autotest_common.sh@926 -- # '[' -z 1236666 ']' 00:27:01.160 06:22:07 -- common/autotest_common.sh@930 -- # kill -0 1236666 00:27:01.160 06:22:07 -- common/autotest_common.sh@931 -- # uname 00:27:01.160 06:22:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:01.160 06:22:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1236666 00:27:01.160 06:22:07 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:27:01.160 06:22:07 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:27:01.160 06:22:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1236666' 00:27:01.160 killing process with pid 1236666 00:27:01.160 06:22:07 -- common/autotest_common.sh@945 -- # kill 1236666 00:27:01.160 06:22:07 -- common/autotest_common.sh@950 -- # wait 1236666 00:27:01.418 06:22:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:01.418 06:22:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:01.418 06:22:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:01.418 06:22:07 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:01.418 06:22:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:01.418 06:22:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.418 06:22:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:01.418 06:22:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.950 06:22:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:03.950 00:27:03.950 real 0m15.970s 00:27:03.950 user 0m46.131s 00:27:03.950 sys 0m7.168s 00:27:03.950 06:22:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:03.950 06:22:09 -- common/autotest_common.sh@10 -- # set +x 00:27:03.950 ************************************ 00:27:03.950 END TEST nvmf_target_disconnect 00:27:03.950 
************************************ 00:27:03.950 06:22:09 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:27:03.950 06:22:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:03.950 06:22:09 -- common/autotest_common.sh@10 -- # set +x 00:27:03.950 06:22:09 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:27:03.950 00:27:03.950 real 21m2.670s 00:27:03.950 user 60m42.968s 00:27:03.950 sys 5m7.284s 00:27:03.950 06:22:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:03.950 06:22:09 -- common/autotest_common.sh@10 -- # set +x 00:27:03.950 ************************************ 00:27:03.950 END TEST nvmf_tcp 00:27:03.950 ************************************ 00:27:03.950 06:22:09 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:27:03.950 06:22:09 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:03.950 06:22:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:03.950 06:22:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:03.950 06:22:09 -- common/autotest_common.sh@10 -- # set +x 00:27:03.950 ************************************ 00:27:03.950 START TEST spdkcli_nvmf_tcp 00:27:03.950 ************************************ 00:27:03.950 06:22:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:03.950 * Looking for test storage... 00:27:03.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:27:03.950 06:22:10 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:27:03.950 06:22:10 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:27:03.950 06:22:10 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:27:03.950 06:22:10 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:03.950 06:22:10 -- nvmf/common.sh@7 -- # uname -s 00:27:03.950 06:22:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:03.950 06:22:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:03.950 06:22:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:03.950 06:22:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:03.950 06:22:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:03.950 06:22:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:03.950 06:22:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:03.950 06:22:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:03.950 06:22:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:03.950 06:22:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:03.950 06:22:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:03.950 06:22:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:03.950 06:22:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:03.950 06:22:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:03.950 06:22:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:03.950 06:22:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:03.950 06:22:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 
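The spdkcli_nvmf_tcp run begins by sourcing test/nvmf/common.sh, and the trace above shows it deriving a host NQN and host ID with nvme gen-hostnqn before continuing with the path exports below. In shell terms that setup amounts to roughly the sketch that follows; it is a simplified reconstruction assuming nvme-cli is installed, not the actual common.sh source.

  # Simplified sketch of the host-identity setup seen in the trace
  # (assumes nvme-cli provides `nvme gen-hostnqn`; not the real common.sh).
  NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}      # keep only the trailing UUID
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  NVME_CONNECT='nvme connect'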
00:27:03.950 06:22:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:03.950 06:22:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:03.950 06:22:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.950 06:22:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.950 06:22:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.950 06:22:10 -- paths/export.sh@5 -- # export PATH 00:27:03.950 06:22:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.950 06:22:10 -- nvmf/common.sh@46 -- # : 0 00:27:03.950 06:22:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:03.950 06:22:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:03.950 06:22:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:03.950 06:22:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:03.950 06:22:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:03.950 06:22:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:03.950 06:22:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:03.950 06:22:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:03.950 06:22:10 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:27:03.950 06:22:10 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:27:03.950 06:22:10 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:27:03.950 06:22:10 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:27:03.950 06:22:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:03.950 06:22:10 -- common/autotest_common.sh@10 -- # set +x 00:27:03.950 06:22:10 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:27:03.950 06:22:10 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1237787 00:27:03.950 06:22:10 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:27:03.950 06:22:10 -- spdkcli/common.sh@34 -- # waitforlisten 1237787 00:27:03.951 06:22:10 -- common/autotest_common.sh@819 -- # '[' -z 1237787 ']' 00:27:03.951 06:22:10 -- common/autotest_common.sh@823 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:27:03.951 06:22:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:03.951 06:22:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:03.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:03.951 06:22:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:03.951 06:22:10 -- common/autotest_common.sh@10 -- # set +x 00:27:03.951 [2024-07-13 06:22:10.089051] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:03.951 [2024-07-13 06:22:10.089146] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1237787 ] 00:27:03.951 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.951 [2024-07-13 06:22:10.157920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:03.951 [2024-07-13 06:22:10.270229] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:03.951 [2024-07-13 06:22:10.271902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:03.951 [2024-07-13 06:22:10.271908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.516 06:22:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:04.516 06:22:11 -- common/autotest_common.sh@852 -- # return 0 00:27:04.516 06:22:11 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:27:04.516 06:22:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:04.516 06:22:11 -- common/autotest_common.sh@10 -- # set +x 00:27:04.777 06:22:11 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:27:04.777 06:22:11 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:27:04.777 06:22:11 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:27:04.777 06:22:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:04.777 06:22:11 -- common/autotest_common.sh@10 -- # set +x 00:27:04.777 06:22:11 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:27:04.777 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:27:04.777 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:27:04.777 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:27:04.777 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:27:04.777 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:27:04.777 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:27:04.777 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:04.777 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:27:04.777 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:27:04.777 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:04.777 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:04.777 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:27:04.777 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:04.777 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:04.777 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:27:04.777 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:04.777 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:04.777 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:04.777 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:04.777 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:27:04.777 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:27:04.777 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:04.777 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:27:04.777 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:04.777 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:27:04.777 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:27:04.777 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:27:04.777 ' 00:27:05.035 [2024-07-13 06:22:11.461044] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:27:07.613 [2024-07-13 06:22:13.617072] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:08.543 [2024-07-13 06:22:14.857513] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:27:11.071 [2024-07-13 06:22:17.144826] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:27:12.970 [2024-07-13 06:22:19.139452] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:27:14.345 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:27:14.345 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:14.345 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:14.345 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:27:14.345 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:14.345 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:14.345 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:14.345 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 
allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:14.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:14.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:14.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:14.345 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:14.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:14.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:14.345 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:14.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:27:14.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:14.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:14.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:14.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:14.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:14.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:14.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:14.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:14.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:14.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:14.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:14.345 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:14.345 06:22:20 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:14.345 06:22:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:14.345 06:22:20 -- common/autotest_common.sh@10 -- # set +x 00:27:14.345 06:22:20 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:14.345 06:22:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:14.345 06:22:20 -- common/autotest_common.sh@10 -- # set +x 00:27:14.345 06:22:20 -- spdkcli/nvmf.sh@69 -- # check_match 00:27:14.345 06:22:20 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:27:14.911 06:22:21 -- 
spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:14.911 06:22:21 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:14.911 06:22:21 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:14.911 06:22:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:14.911 06:22:21 -- common/autotest_common.sh@10 -- # set +x 00:27:14.911 06:22:21 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:14.911 06:22:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:14.911 06:22:21 -- common/autotest_common.sh@10 -- # set +x 00:27:14.911 06:22:21 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:14.911 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:14.911 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:14.911 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:14.911 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:14.911 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:14.911 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:14.911 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:14.911 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:14.911 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:14.911 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:14.911 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:14.911 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:14.911 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:14.911 ' 00:27:20.173 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:20.173 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:20.173 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:20.173 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:20.173 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:27:20.173 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:27:20.173 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:27:20.173 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:20.173 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:27:20.173 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:27:20.173 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:27:20.173 Executing 
command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:20.173 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:20.173 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:20.173 06:22:26 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:27:20.173 06:22:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:20.173 06:22:26 -- common/autotest_common.sh@10 -- # set +x 00:27:20.173 06:22:26 -- spdkcli/nvmf.sh@90 -- # killprocess 1237787 00:27:20.173 06:22:26 -- common/autotest_common.sh@926 -- # '[' -z 1237787 ']' 00:27:20.173 06:22:26 -- common/autotest_common.sh@930 -- # kill -0 1237787 00:27:20.173 06:22:26 -- common/autotest_common.sh@931 -- # uname 00:27:20.173 06:22:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:20.173 06:22:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1237787 00:27:20.173 06:22:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:20.173 06:22:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:20.173 06:22:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1237787' 00:27:20.173 killing process with pid 1237787 00:27:20.173 06:22:26 -- common/autotest_common.sh@945 -- # kill 1237787 00:27:20.173 [2024-07-13 06:22:26.536039] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:27:20.173 06:22:26 -- common/autotest_common.sh@950 -- # wait 1237787 00:27:20.431 06:22:26 -- spdkcli/nvmf.sh@1 -- # cleanup 00:27:20.431 06:22:26 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:27:20.431 06:22:26 -- spdkcli/common.sh@13 -- # '[' -n 1237787 ']' 00:27:20.431 06:22:26 -- spdkcli/common.sh@14 -- # killprocess 1237787 00:27:20.431 06:22:26 -- common/autotest_common.sh@926 -- # '[' -z 1237787 ']' 00:27:20.431 06:22:26 -- common/autotest_common.sh@930 -- # kill -0 1237787 00:27:20.431 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1237787) - No such process 00:27:20.431 06:22:26 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1237787 is not found' 00:27:20.431 Process with pid 1237787 is not found 00:27:20.431 06:22:26 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:27:20.431 06:22:26 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:27:20.431 06:22:26 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:20.431 00:27:20.431 real 0m16.826s 00:27:20.431 user 0m35.653s 00:27:20.431 sys 0m0.842s 00:27:20.431 06:22:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:20.431 06:22:26 -- common/autotest_common.sh@10 -- # set +x 00:27:20.431 ************************************ 00:27:20.431 END TEST spdkcli_nvmf_tcp 00:27:20.431 ************************************ 00:27:20.431 06:22:26 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:20.431 06:22:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:20.431 06:22:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:20.431 06:22:26 -- common/autotest_common.sh@10 -- # set +x 00:27:20.431 ************************************ 00:27:20.431 START TEST 
nvmf_identify_passthru 00:27:20.431 ************************************ 00:27:20.431 06:22:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:20.431 * Looking for test storage... 00:27:20.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:20.431 06:22:26 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:20.431 06:22:26 -- nvmf/common.sh@7 -- # uname -s 00:27:20.431 06:22:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:20.431 06:22:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:20.431 06:22:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:20.431 06:22:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:20.431 06:22:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:20.431 06:22:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:20.431 06:22:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:20.431 06:22:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:20.431 06:22:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:20.431 06:22:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:20.431 06:22:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:20.431 06:22:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:20.431 06:22:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:20.431 06:22:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:20.431 06:22:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:20.431 06:22:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:20.431 06:22:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.431 06:22:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.431 06:22:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.431 06:22:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.431 06:22:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.431 06:22:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.431 06:22:26 -- paths/export.sh@5 -- # export PATH 00:27:20.431 
06:22:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.431 06:22:26 -- nvmf/common.sh@46 -- # : 0 00:27:20.431 06:22:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:20.431 06:22:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:20.431 06:22:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:20.431 06:22:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:20.431 06:22:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:20.431 06:22:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:20.431 06:22:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:20.431 06:22:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:20.431 06:22:26 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:20.431 06:22:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.431 06:22:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.431 06:22:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.431 06:22:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.431 06:22:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.431 06:22:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.431 06:22:26 -- paths/export.sh@5 -- # export PATH 00:27:20.431 06:22:26 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.431 06:22:26 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:27:20.431 06:22:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:20.431 06:22:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:20.431 06:22:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:20.431 06:22:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:20.431 06:22:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:20.431 06:22:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.431 06:22:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:20.431 06:22:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.431 06:22:26 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:20.431 06:22:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:20.431 06:22:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:20.431 06:22:26 -- common/autotest_common.sh@10 -- # set +x 00:27:22.332 06:22:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:22.332 06:22:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:22.332 06:22:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:22.332 06:22:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:22.332 06:22:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:22.332 06:22:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:22.332 06:22:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:22.332 06:22:28 -- nvmf/common.sh@294 -- # net_devs=() 00:27:22.332 06:22:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:22.332 06:22:28 -- nvmf/common.sh@295 -- # e810=() 00:27:22.332 06:22:28 -- nvmf/common.sh@295 -- # local -ga e810 00:27:22.332 06:22:28 -- nvmf/common.sh@296 -- # x722=() 00:27:22.332 06:22:28 -- nvmf/common.sh@296 -- # local -ga x722 00:27:22.332 06:22:28 -- nvmf/common.sh@297 -- # mlx=() 00:27:22.332 06:22:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:22.332 06:22:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:22.332 06:22:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:22.332 06:22:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:22.332 06:22:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:22.332 06:22:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:22.332 06:22:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:22.332 06:22:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:22.332 06:22:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:22.332 06:22:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:22.332 06:22:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:22.332 06:22:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:22.332 06:22:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:22.332 06:22:28 -- nvmf/common.sh@320 -- # [[ tcp 
== rdma ]] 00:27:22.332 06:22:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:22.332 06:22:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:22.332 06:22:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:22.332 06:22:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:22.332 06:22:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:22.332 06:22:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:22.332 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:22.332 06:22:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:22.332 06:22:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:22.332 06:22:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:22.332 06:22:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:22.332 06:22:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:22.332 06:22:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:22.332 06:22:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:22.332 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:22.332 06:22:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:22.332 06:22:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:22.332 06:22:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:22.332 06:22:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:22.332 06:22:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:22.332 06:22:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:22.332 06:22:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:22.332 06:22:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:22.332 06:22:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:22.332 06:22:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:22.332 06:22:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:22.332 06:22:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:22.332 06:22:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:22.332 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:22.332 06:22:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:22.332 06:22:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:22.332 06:22:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:22.332 06:22:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:22.332 06:22:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:22.332 06:22:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:22.332 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:22.332 06:22:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:22.332 06:22:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:22.332 06:22:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:22.332 06:22:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:22.332 06:22:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:22.332 06:22:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:22.332 06:22:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:22.332 06:22:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:22.332 06:22:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:22.332 06:22:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:22.332 06:22:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:22.332 06:22:28 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:22.332 06:22:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:22.332 06:22:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:22.332 06:22:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:22.332 06:22:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:22.332 06:22:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:22.332 06:22:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:22.332 06:22:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:22.590 06:22:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:22.590 06:22:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:22.590 06:22:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:22.590 06:22:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:22.590 06:22:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:22.590 06:22:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:22.590 06:22:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:22.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:22.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:27:22.590 00:27:22.590 --- 10.0.0.2 ping statistics --- 00:27:22.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.591 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:27:22.591 06:22:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:22.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:22.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:27:22.591 00:27:22.591 --- 10.0.0.1 ping statistics --- 00:27:22.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.591 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:27:22.591 06:22:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:22.591 06:22:28 -- nvmf/common.sh@410 -- # return 0 00:27:22.591 06:22:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:22.591 06:22:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:22.591 06:22:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:22.591 06:22:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:22.591 06:22:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:22.591 06:22:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:22.591 06:22:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:22.591 06:22:28 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:27:22.591 06:22:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:22.591 06:22:28 -- common/autotest_common.sh@10 -- # set +x 00:27:22.591 06:22:28 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:27:22.591 06:22:28 -- common/autotest_common.sh@1509 -- # bdfs=() 00:27:22.591 06:22:28 -- common/autotest_common.sh@1509 -- # local bdfs 00:27:22.591 06:22:28 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:27:22.591 06:22:28 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:27:22.591 06:22:28 -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:22.591 06:22:28 -- common/autotest_common.sh@1498 -- # local bdfs 00:27:22.591 06:22:28 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:27:22.591 06:22:28 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:22.591 06:22:28 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:22.591 06:22:29 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:27:22.591 06:22:29 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:27:22.591 06:22:29 -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:27:22.591 06:22:29 -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:27:22.591 06:22:29 -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:27:22.591 06:22:29 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:27:22.591 06:22:29 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:27:22.591 06:22:29 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:27:22.591 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.771 06:22:33 -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:27:26.771 06:22:33 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:27:26.771 06:22:33 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:27:26.771 06:22:33 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:26.771 EAL: No free 2048 kB hugepages reported on node 1 00:27:31.003 06:22:37 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:27:31.003 06:22:37 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:31.003 06:22:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:31.003 06:22:37 -- common/autotest_common.sh@10 -- # set +x 00:27:31.003 06:22:37 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:27:31.003 06:22:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:31.003 06:22:37 -- common/autotest_common.sh@10 -- # set +x 00:27:31.003 06:22:37 -- target/identify_passthru.sh@31 -- # nvmfpid=1242514 00:27:31.003 06:22:37 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:31.003 06:22:37 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:31.003 06:22:37 -- target/identify_passthru.sh@35 -- # waitforlisten 1242514 00:27:31.003 06:22:37 -- common/autotest_common.sh@819 -- # '[' -z 1242514 ']' 00:27:31.003 06:22:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:31.003 06:22:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:31.003 06:22:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:31.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:31.003 06:22:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:31.003 06:22:37 -- common/autotest_common.sh@10 -- # set +x 00:27:31.003 [2024-07-13 06:22:37.476745] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
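For reference, the local-identify and target-launch steps traced above condense to roughly the sketch below (run from the SPDK repo root). The BDF 0000:88:00.0, the cvl_0_0_ns_spdk namespace and the binary paths are the values from this run; head -n1 and the trailing sleep are simplified stand-ins for the suite's get_first_nvme_bdf and waitforlisten helpers, not their real implementations.

# Pick the first local NVMe controller and read its identity directly over PCIe.
bdf=$(scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)
serial=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
model=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')
echo "local controller $bdf: model=$model serial=$serial"

# Start the NVMe-oF target inside the test namespace, paused until RPC configuration is done.
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!
sleep 2   # waitforlisten actually polls /var/tmp/spdk.sock instead of sleeping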
00:27:31.003 [2024-07-13 06:22:37.476835] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:31.003 EAL: No free 2048 kB hugepages reported on node 1 00:27:31.263 [2024-07-13 06:22:37.543483] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:31.263 [2024-07-13 06:22:37.652192] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:31.263 [2024-07-13 06:22:37.652356] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:31.263 [2024-07-13 06:22:37.652373] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:31.263 [2024-07-13 06:22:37.652386] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:31.263 [2024-07-13 06:22:37.652439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:31.263 [2024-07-13 06:22:37.652500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:31.263 [2024-07-13 06:22:37.652567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:31.263 [2024-07-13 06:22:37.652570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.263 06:22:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:31.263 06:22:37 -- common/autotest_common.sh@852 -- # return 0 00:27:31.263 06:22:37 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:31.263 06:22:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:31.263 06:22:37 -- common/autotest_common.sh@10 -- # set +x 00:27:31.263 INFO: Log level set to 20 00:27:31.263 INFO: Requests: 00:27:31.263 { 00:27:31.263 "jsonrpc": "2.0", 00:27:31.263 "method": "nvmf_set_config", 00:27:31.263 "id": 1, 00:27:31.263 "params": { 00:27:31.263 "admin_cmd_passthru": { 00:27:31.263 "identify_ctrlr": true 00:27:31.263 } 00:27:31.263 } 00:27:31.263 } 00:27:31.263 00:27:31.263 INFO: response: 00:27:31.263 { 00:27:31.263 "jsonrpc": "2.0", 00:27:31.263 "id": 1, 00:27:31.263 "result": true 00:27:31.263 } 00:27:31.263 00:27:31.263 06:22:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:31.263 06:22:37 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:31.263 06:22:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:31.263 06:22:37 -- common/autotest_common.sh@10 -- # set +x 00:27:31.263 INFO: Setting log level to 20 00:27:31.263 INFO: Setting log level to 20 00:27:31.263 INFO: Log level set to 20 00:27:31.263 INFO: Log level set to 20 00:27:31.263 INFO: Requests: 00:27:31.263 { 00:27:31.263 "jsonrpc": "2.0", 00:27:31.263 "method": "framework_start_init", 00:27:31.263 "id": 1 00:27:31.263 } 00:27:31.263 00:27:31.263 INFO: Requests: 00:27:31.263 { 00:27:31.263 "jsonrpc": "2.0", 00:27:31.263 "method": "framework_start_init", 00:27:31.263 "id": 1 00:27:31.263 } 00:27:31.263 00:27:31.522 [2024-07-13 06:22:37.794237] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:27:31.522 INFO: response: 00:27:31.522 { 00:27:31.522 "jsonrpc": "2.0", 00:27:31.522 "id": 1, 00:27:31.522 "result": true 00:27:31.522 } 00:27:31.522 00:27:31.522 INFO: response: 00:27:31.522 { 00:27:31.522 "jsonrpc": "2.0", 00:27:31.522 "id": 1, 00:27:31.522 "result": true 00:27:31.522 } 00:27:31.522 00:27:31.522 06:22:37 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:31.522 06:22:37 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:31.522 06:22:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:31.522 06:22:37 -- common/autotest_common.sh@10 -- # set +x 00:27:31.522 INFO: Setting log level to 40 00:27:31.522 INFO: Setting log level to 40 00:27:31.522 INFO: Setting log level to 40 00:27:31.522 [2024-07-13 06:22:37.804319] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:31.522 06:22:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:31.522 06:22:37 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:27:31.522 06:22:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:31.522 06:22:37 -- common/autotest_common.sh@10 -- # set +x 00:27:31.522 06:22:37 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:27:31.522 06:22:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:31.522 06:22:37 -- common/autotest_common.sh@10 -- # set +x 00:27:34.811 Nvme0n1 00:27:34.811 06:22:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:34.811 06:22:40 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:27:34.811 06:22:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:34.811 06:22:40 -- common/autotest_common.sh@10 -- # set +x 00:27:34.811 06:22:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:34.811 06:22:40 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:34.811 06:22:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:34.811 06:22:40 -- common/autotest_common.sh@10 -- # set +x 00:27:34.811 06:22:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:34.812 06:22:40 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:34.812 06:22:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:34.812 06:22:40 -- common/autotest_common.sh@10 -- # set +x 00:27:34.812 [2024-07-13 06:22:40.695970] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:34.812 06:22:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:34.812 06:22:40 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:27:34.812 06:22:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:34.812 06:22:40 -- common/autotest_common.sh@10 -- # set +x 00:27:34.812 [2024-07-13 06:22:40.703701] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:27:34.812 [ 00:27:34.812 { 00:27:34.812 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:34.812 "subtype": "Discovery", 00:27:34.812 "listen_addresses": [], 00:27:34.812 "allow_any_host": true, 00:27:34.812 "hosts": [] 00:27:34.812 }, 00:27:34.812 { 00:27:34.812 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:34.812 "subtype": "NVMe", 00:27:34.812 "listen_addresses": [ 00:27:34.812 { 00:27:34.812 "transport": "TCP", 00:27:34.812 "trtype": "TCP", 00:27:34.812 "adrfam": "IPv4", 00:27:34.812 "traddr": "10.0.0.2", 00:27:34.812 "trsvcid": "4420" 00:27:34.812 } 00:27:34.812 ], 00:27:34.812 "allow_any_host": true, 00:27:34.812 "hosts": [], 00:27:34.812 "serial_number": "SPDK00000000000001", 
00:27:34.812 "model_number": "SPDK bdev Controller", 00:27:34.812 "max_namespaces": 1, 00:27:34.812 "min_cntlid": 1, 00:27:34.812 "max_cntlid": 65519, 00:27:34.812 "namespaces": [ 00:27:34.812 { 00:27:34.812 "nsid": 1, 00:27:34.812 "bdev_name": "Nvme0n1", 00:27:34.812 "name": "Nvme0n1", 00:27:34.812 "nguid": "AE9C0714DF4345ACB4E7F31AB33C44AE", 00:27:34.812 "uuid": "ae9c0714-df43-45ac-b4e7-f31ab33c44ae" 00:27:34.812 } 00:27:34.812 ] 00:27:34.812 } 00:27:34.812 ] 00:27:34.812 06:22:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:34.812 06:22:40 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:34.812 06:22:40 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:27:34.812 06:22:40 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:27:34.812 EAL: No free 2048 kB hugepages reported on node 1 00:27:34.812 06:22:40 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:27:34.812 06:22:40 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:34.812 06:22:40 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:27:34.812 06:22:40 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:27:34.812 EAL: No free 2048 kB hugepages reported on node 1 00:27:34.812 06:22:41 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:27:34.812 06:22:41 -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:27:34.812 06:22:41 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:27:34.812 06:22:41 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:34.812 06:22:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:34.812 06:22:41 -- common/autotest_common.sh@10 -- # set +x 00:27:34.812 06:22:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:34.812 06:22:41 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:27:34.812 06:22:41 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:27:34.812 06:22:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:34.812 06:22:41 -- nvmf/common.sh@116 -- # sync 00:27:34.812 06:22:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:34.812 06:22:41 -- nvmf/common.sh@119 -- # set +e 00:27:34.812 06:22:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:34.812 06:22:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:34.812 rmmod nvme_tcp 00:27:34.812 rmmod nvme_fabrics 00:27:34.812 rmmod nvme_keyring 00:27:34.812 06:22:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:34.812 06:22:41 -- nvmf/common.sh@123 -- # set -e 00:27:34.812 06:22:41 -- nvmf/common.sh@124 -- # return 0 00:27:34.812 06:22:41 -- nvmf/common.sh@477 -- # '[' -n 1242514 ']' 00:27:34.812 06:22:41 -- nvmf/common.sh@478 -- # killprocess 1242514 00:27:34.812 06:22:41 -- common/autotest_common.sh@926 -- # '[' -z 1242514 ']' 00:27:34.812 06:22:41 -- common/autotest_common.sh@930 -- # kill -0 1242514 00:27:34.812 06:22:41 -- common/autotest_common.sh@931 -- # uname 00:27:34.812 06:22:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:34.812 06:22:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1242514 00:27:34.812 06:22:41 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:34.812 06:22:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:34.812 06:22:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1242514' 00:27:34.812 killing process with pid 1242514 00:27:34.812 06:22:41 -- common/autotest_common.sh@945 -- # kill 1242514 00:27:34.812 [2024-07-13 06:22:41.157384] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:27:34.812 06:22:41 -- common/autotest_common.sh@950 -- # wait 1242514 00:27:36.716 06:22:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:36.716 06:22:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:36.716 06:22:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:36.716 06:22:42 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:36.716 06:22:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:36.716 06:22:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.716 06:22:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:36.716 06:22:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.618 06:22:44 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:38.618 00:27:38.618 real 0m17.974s 00:27:38.618 user 0m26.675s 00:27:38.618 sys 0m2.290s 00:27:38.618 06:22:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:38.618 06:22:44 -- common/autotest_common.sh@10 -- # set +x 00:27:38.618 ************************************ 00:27:38.618 END TEST nvmf_identify_passthru 00:27:38.618 ************************************ 00:27:38.618 06:22:44 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:38.618 06:22:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:38.618 06:22:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:38.618 06:22:44 -- common/autotest_common.sh@10 -- # set +x 00:27:38.618 ************************************ 00:27:38.618 START TEST nvmf_dif 00:27:38.618 ************************************ 00:27:38.618 06:22:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:38.618 * Looking for test storage... 
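Before the DIF suite output continues: the identify_passthru flow that just finished (END TEST above) reduces to the RPC sequence sketched here, with scripts/rpc.py standing in for the suite's rpc_cmd wrapper. The NQNs, the 10.0.0.2:4420 listener and the 0000:88:00.0 BDF are taken from this run.

# Enable Identify passthrough before subsystem initialization, then finish init.
scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
scripts/rpc.py framework_start_init
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# Attach the local PCIe controller as Nvme0 and export its namespace over NVMe/TCP.
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# The actual check: Identify data read through the fabric must match the local drive.
build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep -E 'Serial Number:|Model Number:'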
00:27:38.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:38.618 06:22:44 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:38.618 06:22:44 -- nvmf/common.sh@7 -- # uname -s 00:27:38.618 06:22:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.618 06:22:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.618 06:22:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.618 06:22:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.618 06:22:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.618 06:22:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.618 06:22:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.618 06:22:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.618 06:22:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.618 06:22:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.618 06:22:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:38.618 06:22:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:38.618 06:22:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:38.618 06:22:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:38.618 06:22:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:38.618 06:22:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:38.618 06:22:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.618 06:22:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.618 06:22:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.618 06:22:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.618 06:22:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.618 06:22:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.618 06:22:44 -- paths/export.sh@5 -- # export PATH 00:27:38.618 06:22:44 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.618 06:22:44 -- nvmf/common.sh@46 -- # : 0 00:27:38.618 06:22:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:38.618 06:22:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:38.618 06:22:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:38.618 06:22:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.618 06:22:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.618 06:22:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:38.618 06:22:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:38.618 06:22:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:38.618 06:22:44 -- target/dif.sh@15 -- # NULL_META=16 00:27:38.618 06:22:44 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:38.618 06:22:44 -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:38.618 06:22:44 -- target/dif.sh@15 -- # NULL_DIF=1 00:27:38.618 06:22:44 -- target/dif.sh@135 -- # nvmftestinit 00:27:38.618 06:22:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:38.618 06:22:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:38.618 06:22:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:38.618 06:22:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:38.618 06:22:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:38.618 06:22:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.618 06:22:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:38.618 06:22:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.618 06:22:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:38.618 06:22:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:38.618 06:22:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:38.618 06:22:44 -- common/autotest_common.sh@10 -- # set +x 00:27:40.523 06:22:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:40.523 06:22:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:40.523 06:22:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:40.523 06:22:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:40.523 06:22:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:40.523 06:22:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:40.523 06:22:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:40.523 06:22:46 -- nvmf/common.sh@294 -- # net_devs=() 00:27:40.523 06:22:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:40.523 06:22:46 -- nvmf/common.sh@295 -- # e810=() 00:27:40.523 06:22:46 -- nvmf/common.sh@295 -- # local -ga e810 00:27:40.523 06:22:46 -- nvmf/common.sh@296 -- # x722=() 00:27:40.523 06:22:46 -- nvmf/common.sh@296 -- # local -ga x722 00:27:40.523 06:22:46 -- nvmf/common.sh@297 -- # mlx=() 00:27:40.523 06:22:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:40.523 06:22:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:40.523 06:22:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:40.523 06:22:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:40.523 06:22:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:27:40.523 06:22:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:40.523 06:22:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:40.523 06:22:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:40.523 06:22:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:40.523 06:22:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:40.523 06:22:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:40.523 06:22:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:40.523 06:22:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:40.523 06:22:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:40.523 06:22:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:40.523 06:22:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:40.523 06:22:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:40.523 06:22:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:40.523 06:22:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:40.523 06:22:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:40.523 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:40.523 06:22:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:40.523 06:22:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:40.523 06:22:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.523 06:22:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.523 06:22:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:40.523 06:22:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:40.523 06:22:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:40.523 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:40.523 06:22:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:40.523 06:22:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:40.523 06:22:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.523 06:22:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.523 06:22:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:40.523 06:22:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:40.523 06:22:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:40.523 06:22:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:40.523 06:22:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:40.523 06:22:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.523 06:22:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:40.523 06:22:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.523 06:22:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:40.523 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:40.523 06:22:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.523 06:22:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:40.523 06:22:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.523 06:22:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:40.523 06:22:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.523 06:22:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:40.523 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:40.523 06:22:46 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:40.523 06:22:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:40.523 06:22:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:40.523 06:22:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:40.523 06:22:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:40.523 06:22:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:40.523 06:22:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:40.523 06:22:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:40.523 06:22:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:40.523 06:22:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:40.523 06:22:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:40.523 06:22:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:40.523 06:22:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:40.523 06:22:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:40.523 06:22:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:40.524 06:22:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:40.524 06:22:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:40.524 06:22:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:40.524 06:22:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:40.524 06:22:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:40.524 06:22:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:40.524 06:22:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:40.524 06:22:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:40.524 06:22:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:40.524 06:22:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:40.524 06:22:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:40.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:40.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:27:40.524 00:27:40.524 --- 10.0.0.2 ping statistics --- 00:27:40.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.524 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:27:40.524 06:22:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:40.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:40.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:27:40.524 00:27:40.524 --- 10.0.0.1 ping statistics --- 00:27:40.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.524 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:27:40.524 06:22:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:40.524 06:22:46 -- nvmf/common.sh@410 -- # return 0 00:27:40.524 06:22:46 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:27:40.524 06:22:46 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:41.459 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:27:41.459 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:41.459 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:27:41.459 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:27:41.459 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:27:41.459 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:27:41.459 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:27:41.459 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:27:41.459 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:27:41.459 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:27:41.459 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:27:41.459 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:27:41.459 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:27:41.716 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:27:41.716 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:27:41.716 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:27:41.716 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:27:41.716 06:22:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:41.716 06:22:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:41.716 06:22:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:41.716 06:22:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:41.716 06:22:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:41.716 06:22:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:41.716 06:22:48 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:41.716 06:22:48 -- target/dif.sh@137 -- # nvmfappstart 00:27:41.716 06:22:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:41.716 06:22:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:41.716 06:22:48 -- common/autotest_common.sh@10 -- # set +x 00:27:41.716 06:22:48 -- nvmf/common.sh@469 -- # nvmfpid=1245826 00:27:41.716 06:22:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:41.716 06:22:48 -- nvmf/common.sh@470 -- # waitforlisten 1245826 00:27:41.716 06:22:48 -- common/autotest_common.sh@819 -- # '[' -z 1245826 ']' 00:27:41.716 06:22:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:41.716 06:22:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:41.716 06:22:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:41.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
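The namespace plumbing that nvmftestinit just traced, pulled together in one place (cvl_0_0 and cvl_0_1 are the two E810 ports discovered above; the initial address flushes, error handling and the RPC-socket wait are omitted):

# Move one port into a private namespace for the target; keep the other as the initiator.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                                 # reach the target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # and back to the initiator
# The DIF suite then starts its target inside that namespace:
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF &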
00:27:41.717 06:22:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:41.717 06:22:48 -- common/autotest_common.sh@10 -- # set +x 00:27:41.717 [2024-07-13 06:22:48.195326] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:41.717 [2024-07-13 06:22:48.195410] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:41.976 EAL: No free 2048 kB hugepages reported on node 1 00:27:41.976 [2024-07-13 06:22:48.264465] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.976 [2024-07-13 06:22:48.379626] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:41.976 [2024-07-13 06:22:48.379785] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:41.976 [2024-07-13 06:22:48.379804] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:41.976 [2024-07-13 06:22:48.379818] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:41.976 [2024-07-13 06:22:48.379849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.916 06:22:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:42.916 06:22:49 -- common/autotest_common.sh@852 -- # return 0 00:27:42.916 06:22:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:42.916 06:22:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:42.916 06:22:49 -- common/autotest_common.sh@10 -- # set +x 00:27:42.916 06:22:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:42.916 06:22:49 -- target/dif.sh@139 -- # create_transport 00:27:42.916 06:22:49 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:42.916 06:22:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:42.916 06:22:49 -- common/autotest_common.sh@10 -- # set +x 00:27:42.916 [2024-07-13 06:22:49.197630] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:42.916 06:22:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:42.916 06:22:49 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:42.916 06:22:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:42.916 06:22:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:42.916 06:22:49 -- common/autotest_common.sh@10 -- # set +x 00:27:42.916 ************************************ 00:27:42.916 START TEST fio_dif_1_default 00:27:42.916 ************************************ 00:27:42.916 06:22:49 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:27:42.916 06:22:49 -- target/dif.sh@86 -- # create_subsystems 0 00:27:42.916 06:22:49 -- target/dif.sh@28 -- # local sub 00:27:42.916 06:22:49 -- target/dif.sh@30 -- # for sub in "$@" 00:27:42.916 06:22:49 -- target/dif.sh@31 -- # create_subsystem 0 00:27:42.916 06:22:49 -- target/dif.sh@18 -- # local sub_id=0 00:27:42.916 06:22:49 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:42.916 06:22:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:42.916 06:22:49 -- common/autotest_common.sh@10 -- # set +x 00:27:42.916 bdev_null0 00:27:42.916 06:22:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:42.916 06:22:49 -- target/dif.sh@22 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:42.916 06:22:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:42.916 06:22:49 -- common/autotest_common.sh@10 -- # set +x 00:27:42.916 06:22:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:42.916 06:22:49 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:42.916 06:22:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:42.916 06:22:49 -- common/autotest_common.sh@10 -- # set +x 00:27:42.916 06:22:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:42.916 06:22:49 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:42.916 06:22:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:42.916 06:22:49 -- common/autotest_common.sh@10 -- # set +x 00:27:42.916 [2024-07-13 06:22:49.233894] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:42.916 06:22:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:42.916 06:22:49 -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:42.916 06:22:49 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:42.916 06:22:49 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:42.916 06:22:49 -- nvmf/common.sh@520 -- # config=() 00:27:42.916 06:22:49 -- nvmf/common.sh@520 -- # local subsystem config 00:27:42.916 06:22:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:42.916 06:22:49 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:42.916 06:22:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:42.916 { 00:27:42.916 "params": { 00:27:42.916 "name": "Nvme$subsystem", 00:27:42.916 "trtype": "$TEST_TRANSPORT", 00:27:42.916 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:42.916 "adrfam": "ipv4", 00:27:42.916 "trsvcid": "$NVMF_PORT", 00:27:42.916 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:42.916 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:42.916 "hdgst": ${hdgst:-false}, 00:27:42.916 "ddgst": ${ddgst:-false} 00:27:42.916 }, 00:27:42.916 "method": "bdev_nvme_attach_controller" 00:27:42.916 } 00:27:42.916 EOF 00:27:42.916 )") 00:27:42.916 06:22:49 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:42.916 06:22:49 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:42.916 06:22:49 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:42.916 06:22:49 -- target/dif.sh@82 -- # gen_fio_conf 00:27:42.916 06:22:49 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:42.916 06:22:49 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:42.916 06:22:49 -- target/dif.sh@54 -- # local file 00:27:42.916 06:22:49 -- common/autotest_common.sh@1320 -- # shift 00:27:42.916 06:22:49 -- target/dif.sh@56 -- # cat 00:27:42.916 06:22:49 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:42.916 06:22:49 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:42.916 06:22:49 -- nvmf/common.sh@542 -- # cat 00:27:42.916 06:22:49 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:42.916 06:22:49 -- 
common/autotest_common.sh@1324 -- # grep libasan 00:27:42.916 06:22:49 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:42.916 06:22:49 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:42.916 06:22:49 -- target/dif.sh@72 -- # (( file <= files )) 00:27:42.916 06:22:49 -- nvmf/common.sh@544 -- # jq . 00:27:42.916 06:22:49 -- nvmf/common.sh@545 -- # IFS=, 00:27:42.916 06:22:49 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:42.916 "params": { 00:27:42.916 "name": "Nvme0", 00:27:42.916 "trtype": "tcp", 00:27:42.916 "traddr": "10.0.0.2", 00:27:42.916 "adrfam": "ipv4", 00:27:42.916 "trsvcid": "4420", 00:27:42.916 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:42.916 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:42.916 "hdgst": false, 00:27:42.916 "ddgst": false 00:27:42.916 }, 00:27:42.916 "method": "bdev_nvme_attach_controller" 00:27:42.916 }' 00:27:42.916 06:22:49 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:42.916 06:22:49 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:42.916 06:22:49 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:42.916 06:22:49 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:42.916 06:22:49 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:42.916 06:22:49 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:42.916 06:22:49 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:42.916 06:22:49 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:42.916 06:22:49 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:42.916 06:22:49 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:43.176 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:43.176 fio-3.35 00:27:43.176 Starting 1 thread 00:27:43.176 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.435 [2024-07-13 06:22:49.891590] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:27:43.435 [2024-07-13 06:22:49.891667] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:27:55.671 00:27:55.671 filename0: (groupid=0, jobs=1): err= 0: pid=1246076: Sat Jul 13 06:23:00 2024 00:27:55.671 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10013msec) 00:27:55.671 slat (nsec): min=4247, max=46331, avg=9342.01, stdev=3226.72 00:27:55.671 clat (usec): min=40879, max=48387, avg=41005.48, stdev=477.10 00:27:55.671 lat (usec): min=40886, max=48412, avg=41014.82, stdev=477.18 00:27:55.671 clat percentiles (usec): 00:27:55.671 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:27:55.671 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:27:55.671 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:27:55.671 | 99.00th=[41157], 99.50th=[41681], 99.90th=[48497], 99.95th=[48497], 00:27:55.671 | 99.99th=[48497] 00:27:55.671 bw ( KiB/s): min= 384, max= 416, per=99.51%, avg=388.80, stdev=11.72, samples=20 00:27:55.671 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:27:55.671 lat (msec) : 50=100.00% 00:27:55.671 cpu : usr=90.34%, sys=9.40%, ctx=13, majf=0, minf=250 00:27:55.671 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:55.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.671 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.671 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.671 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:55.671 00:27:55.671 Run status group 0 (all jobs): 00:27:55.671 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10013-10013msec 00:27:55.671 06:23:00 -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:55.671 06:23:00 -- target/dif.sh@43 -- # local sub 00:27:55.671 06:23:00 -- target/dif.sh@45 -- # for sub in "$@" 00:27:55.671 06:23:00 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:55.671 06:23:00 -- target/dif.sh@36 -- # local sub_id=0 00:27:55.671 06:23:00 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:55.672 06:23:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:55.672 06:23:00 -- common/autotest_common.sh@10 -- # set +x 00:27:55.672 06:23:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:55.672 06:23:00 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:55.672 06:23:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:55.672 06:23:00 -- common/autotest_common.sh@10 -- # set +x 00:27:55.672 06:23:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:55.672 00:27:55.672 real 0m11.052s 00:27:55.672 user 0m10.077s 00:27:55.672 sys 0m1.186s 00:27:55.672 06:23:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:55.672 06:23:00 -- common/autotest_common.sh@10 -- # set +x 00:27:55.672 ************************************ 00:27:55.672 END TEST fio_dif_1_default 00:27:55.672 ************************************ 00:27:55.672 06:23:00 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:55.672 06:23:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:55.672 06:23:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:55.672 06:23:00 -- common/autotest_common.sh@10 -- # set +x 00:27:55.672 ************************************ 00:27:55.672 START TEST fio_dif_1_multi_subsystems 00:27:55.672 
************************************ 00:27:55.672 06:23:00 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:27:55.672 06:23:00 -- target/dif.sh@92 -- # local files=1 00:27:55.672 06:23:00 -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:55.672 06:23:00 -- target/dif.sh@28 -- # local sub 00:27:55.672 06:23:00 -- target/dif.sh@30 -- # for sub in "$@" 00:27:55.672 06:23:00 -- target/dif.sh@31 -- # create_subsystem 0 00:27:55.672 06:23:00 -- target/dif.sh@18 -- # local sub_id=0 00:27:55.672 06:23:00 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:55.672 06:23:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:55.672 06:23:00 -- common/autotest_common.sh@10 -- # set +x 00:27:55.672 bdev_null0 00:27:55.672 06:23:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:55.672 06:23:00 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:55.672 06:23:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:55.672 06:23:00 -- common/autotest_common.sh@10 -- # set +x 00:27:55.672 06:23:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:55.672 06:23:00 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:55.672 06:23:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:55.672 06:23:00 -- common/autotest_common.sh@10 -- # set +x 00:27:55.672 06:23:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:55.672 06:23:00 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:55.672 06:23:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:55.672 06:23:00 -- common/autotest_common.sh@10 -- # set +x 00:27:55.672 [2024-07-13 06:23:00.314146] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:55.672 06:23:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:55.672 06:23:00 -- target/dif.sh@30 -- # for sub in "$@" 00:27:55.672 06:23:00 -- target/dif.sh@31 -- # create_subsystem 1 00:27:55.672 06:23:00 -- target/dif.sh@18 -- # local sub_id=1 00:27:55.672 06:23:00 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:55.672 06:23:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:55.672 06:23:00 -- common/autotest_common.sh@10 -- # set +x 00:27:55.672 bdev_null1 00:27:55.672 06:23:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:55.672 06:23:00 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:55.672 06:23:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:55.672 06:23:00 -- common/autotest_common.sh@10 -- # set +x 00:27:55.672 06:23:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:55.672 06:23:00 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:55.672 06:23:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:55.672 06:23:00 -- common/autotest_common.sh@10 -- # set +x 00:27:55.672 06:23:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:55.672 06:23:00 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:55.672 06:23:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:55.672 06:23:00 -- common/autotest_common.sh@10 -- # set +x 
00:27:55.672 06:23:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:55.672 06:23:00 -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:55.672 06:23:00 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:55.672 06:23:00 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:55.672 06:23:00 -- nvmf/common.sh@520 -- # config=() 00:27:55.672 06:23:00 -- nvmf/common.sh@520 -- # local subsystem config 00:27:55.672 06:23:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:55.672 06:23:00 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:55.672 06:23:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:55.672 { 00:27:55.672 "params": { 00:27:55.672 "name": "Nvme$subsystem", 00:27:55.672 "trtype": "$TEST_TRANSPORT", 00:27:55.672 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.672 "adrfam": "ipv4", 00:27:55.672 "trsvcid": "$NVMF_PORT", 00:27:55.672 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.672 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.672 "hdgst": ${hdgst:-false}, 00:27:55.672 "ddgst": ${ddgst:-false} 00:27:55.672 }, 00:27:55.672 "method": "bdev_nvme_attach_controller" 00:27:55.672 } 00:27:55.672 EOF 00:27:55.672 )") 00:27:55.672 06:23:00 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:55.672 06:23:00 -- target/dif.sh@82 -- # gen_fio_conf 00:27:55.672 06:23:00 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:55.672 06:23:00 -- target/dif.sh@54 -- # local file 00:27:55.672 06:23:00 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:55.672 06:23:00 -- target/dif.sh@56 -- # cat 00:27:55.672 06:23:00 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:55.672 06:23:00 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:55.672 06:23:00 -- common/autotest_common.sh@1320 -- # shift 00:27:55.672 06:23:00 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:55.672 06:23:00 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:55.672 06:23:00 -- nvmf/common.sh@542 -- # cat 00:27:55.672 06:23:00 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:55.672 06:23:00 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:55.672 06:23:00 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:55.672 06:23:00 -- target/dif.sh@72 -- # (( file <= files )) 00:27:55.672 06:23:00 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:55.672 06:23:00 -- target/dif.sh@73 -- # cat 00:27:55.672 06:23:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:55.672 06:23:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:55.672 { 00:27:55.672 "params": { 00:27:55.672 "name": "Nvme$subsystem", 00:27:55.672 "trtype": "$TEST_TRANSPORT", 00:27:55.672 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.672 "adrfam": "ipv4", 00:27:55.672 "trsvcid": "$NVMF_PORT", 00:27:55.672 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.672 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.672 "hdgst": ${hdgst:-false}, 00:27:55.672 "ddgst": ${ddgst:-false} 00:27:55.672 }, 00:27:55.672 "method": "bdev_nvme_attach_controller" 00:27:55.672 } 00:27:55.672 EOF 00:27:55.672 )") 00:27:55.672 06:23:00 -- nvmf/common.sh@542 -- # cat 00:27:55.672 
06:23:00 -- target/dif.sh@72 -- # (( file++ )) 00:27:55.672 06:23:00 -- target/dif.sh@72 -- # (( file <= files )) 00:27:55.672 06:23:00 -- nvmf/common.sh@544 -- # jq . 00:27:55.672 06:23:00 -- nvmf/common.sh@545 -- # IFS=, 00:27:55.672 06:23:00 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:55.672 "params": { 00:27:55.672 "name": "Nvme0", 00:27:55.672 "trtype": "tcp", 00:27:55.672 "traddr": "10.0.0.2", 00:27:55.672 "adrfam": "ipv4", 00:27:55.672 "trsvcid": "4420", 00:27:55.672 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:55.672 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:55.672 "hdgst": false, 00:27:55.672 "ddgst": false 00:27:55.672 }, 00:27:55.672 "method": "bdev_nvme_attach_controller" 00:27:55.672 },{ 00:27:55.672 "params": { 00:27:55.672 "name": "Nvme1", 00:27:55.672 "trtype": "tcp", 00:27:55.672 "traddr": "10.0.0.2", 00:27:55.672 "adrfam": "ipv4", 00:27:55.672 "trsvcid": "4420", 00:27:55.672 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:55.672 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:55.672 "hdgst": false, 00:27:55.672 "ddgst": false 00:27:55.672 }, 00:27:55.672 "method": "bdev_nvme_attach_controller" 00:27:55.672 }' 00:27:55.672 06:23:00 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:55.672 06:23:00 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:55.672 06:23:00 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:55.672 06:23:00 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:55.672 06:23:00 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:55.672 06:23:00 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:55.672 06:23:00 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:55.672 06:23:00 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:55.672 06:23:00 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:55.672 06:23:00 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:55.672 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:55.672 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:55.672 fio-3.35 00:27:55.672 Starting 2 threads 00:27:55.672 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.672 [2024-07-13 06:23:01.185050] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:27:55.672 [2024-07-13 06:23:01.185110] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:28:05.636 00:28:05.636 filename0: (groupid=0, jobs=1): err= 0: pid=1247518: Sat Jul 13 06:23:11 2024 00:28:05.636 read: IOPS=96, BW=385KiB/s (394kB/s)(3856KiB/10023msec) 00:28:05.636 slat (nsec): min=6463, max=37697, avg=8545.01, stdev=2568.88 00:28:05.636 clat (usec): min=40851, max=46019, avg=41558.20, stdev=596.79 00:28:05.636 lat (usec): min=40863, max=46057, avg=41566.74, stdev=597.03 00:28:05.636 clat percentiles (usec): 00:28:05.636 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:28:05.636 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:28:05.636 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:28:05.636 | 99.00th=[42730], 99.50th=[42730], 99.90th=[45876], 99.95th=[45876], 00:28:05.636 | 99.99th=[45876] 00:28:05.636 bw ( KiB/s): min= 383, max= 384, per=33.51%, avg=383.95, stdev= 0.22, samples=20 00:28:05.636 iops : min= 95, max= 96, avg=95.95, stdev= 0.22, samples=20 00:28:05.636 lat (msec) : 50=100.00% 00:28:05.636 cpu : usr=94.34%, sys=5.36%, ctx=16, majf=0, minf=148 00:28:05.636 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:05.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.636 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.636 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:05.636 filename1: (groupid=0, jobs=1): err= 0: pid=1247519: Sat Jul 13 06:23:11 2024 00:28:05.636 read: IOPS=189, BW=760KiB/s (778kB/s)(7600KiB/10003msec) 00:28:05.636 slat (nsec): min=6082, max=61793, avg=8519.41, stdev=2988.31 00:28:05.636 clat (usec): min=707, max=46024, avg=21032.30, stdev=20155.61 00:28:05.636 lat (usec): min=714, max=46062, avg=21040.82, stdev=20155.55 00:28:05.636 clat percentiles (usec): 00:28:05.636 | 1.00th=[ 758], 5.00th=[ 783], 10.00th=[ 799], 20.00th=[ 840], 00:28:05.636 | 30.00th=[ 848], 40.00th=[ 865], 50.00th=[41157], 60.00th=[41157], 00:28:05.636 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:28:05.636 | 99.00th=[41157], 99.50th=[41681], 99.90th=[45876], 99.95th=[45876], 00:28:05.636 | 99.99th=[45876] 00:28:05.636 bw ( KiB/s): min= 702, max= 768, per=66.58%, avg=761.16, stdev=20.50, samples=19 00:28:05.636 iops : min= 175, max= 192, avg=190.26, stdev= 5.21, samples=19 00:28:05.636 lat (usec) : 750=0.53%, 1000=48.95% 00:28:05.636 lat (msec) : 2=0.42%, 50=50.11% 00:28:05.636 cpu : usr=95.14%, sys=4.55%, ctx=19, majf=0, minf=211 00:28:05.636 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:05.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:05.636 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:05.636 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:05.636 00:28:05.636 Run status group 0 (all jobs): 00:28:05.636 READ: bw=1143KiB/s (1170kB/s), 385KiB/s-760KiB/s (394kB/s-778kB/s), io=11.2MiB (11.7MB), run=10003-10023msec 00:28:05.636 06:23:11 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:28:05.636 06:23:11 -- target/dif.sh@43 -- # local sub 00:28:05.636 06:23:11 -- target/dif.sh@45 -- # for sub in "$@" 00:28:05.636 06:23:11 -- target/dif.sh@46 -- # destroy_subsystem 0 
00:28:05.636 06:23:11 -- target/dif.sh@36 -- # local sub_id=0 00:28:05.636 06:23:11 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:05.636 06:23:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:05.636 06:23:11 -- common/autotest_common.sh@10 -- # set +x 00:28:05.636 06:23:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:05.636 06:23:11 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:05.636 06:23:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:05.636 06:23:11 -- common/autotest_common.sh@10 -- # set +x 00:28:05.636 06:23:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:05.636 06:23:11 -- target/dif.sh@45 -- # for sub in "$@" 00:28:05.636 06:23:11 -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:05.636 06:23:11 -- target/dif.sh@36 -- # local sub_id=1 00:28:05.636 06:23:11 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:05.636 06:23:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:05.636 06:23:11 -- common/autotest_common.sh@10 -- # set +x 00:28:05.636 06:23:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:05.636 06:23:11 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:05.636 06:23:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:05.636 06:23:11 -- common/autotest_common.sh@10 -- # set +x 00:28:05.636 06:23:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:05.636 00:28:05.636 real 0m11.289s 00:28:05.636 user 0m20.228s 00:28:05.636 sys 0m1.263s 00:28:05.636 06:23:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:05.636 06:23:11 -- common/autotest_common.sh@10 -- # set +x 00:28:05.636 ************************************ 00:28:05.636 END TEST fio_dif_1_multi_subsystems 00:28:05.636 ************************************ 00:28:05.636 06:23:11 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:28:05.636 06:23:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:05.636 06:23:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:05.636 06:23:11 -- common/autotest_common.sh@10 -- # set +x 00:28:05.636 ************************************ 00:28:05.636 START TEST fio_dif_rand_params 00:28:05.636 ************************************ 00:28:05.636 06:23:11 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:28:05.636 06:23:11 -- target/dif.sh@100 -- # local NULL_DIF 00:28:05.636 06:23:11 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:28:05.636 06:23:11 -- target/dif.sh@103 -- # NULL_DIF=3 00:28:05.636 06:23:11 -- target/dif.sh@103 -- # bs=128k 00:28:05.636 06:23:11 -- target/dif.sh@103 -- # numjobs=3 00:28:05.636 06:23:11 -- target/dif.sh@103 -- # iodepth=3 00:28:05.636 06:23:11 -- target/dif.sh@103 -- # runtime=5 00:28:05.636 06:23:11 -- target/dif.sh@105 -- # create_subsystems 0 00:28:05.636 06:23:11 -- target/dif.sh@28 -- # local sub 00:28:05.636 06:23:11 -- target/dif.sh@30 -- # for sub in "$@" 00:28:05.636 06:23:11 -- target/dif.sh@31 -- # create_subsystem 0 00:28:05.636 06:23:11 -- target/dif.sh@18 -- # local sub_id=0 00:28:05.636 06:23:11 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:05.636 06:23:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:05.636 06:23:11 -- common/autotest_common.sh@10 -- # set +x 00:28:05.636 bdev_null0 00:28:05.636 06:23:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:05.636 06:23:11 -- 
target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:05.636 06:23:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:05.636 06:23:11 -- common/autotest_common.sh@10 -- # set +x 00:28:05.636 06:23:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:05.636 06:23:11 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:05.636 06:23:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:05.636 06:23:11 -- common/autotest_common.sh@10 -- # set +x 00:28:05.636 06:23:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:05.636 06:23:11 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:05.636 06:23:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:05.636 06:23:11 -- common/autotest_common.sh@10 -- # set +x 00:28:05.636 [2024-07-13 06:23:11.629443] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:05.636 06:23:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:05.636 06:23:11 -- target/dif.sh@106 -- # fio /dev/fd/62 00:28:05.636 06:23:11 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:28:05.636 06:23:11 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:05.636 06:23:11 -- nvmf/common.sh@520 -- # config=() 00:28:05.636 06:23:11 -- nvmf/common.sh@520 -- # local subsystem config 00:28:05.636 06:23:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:05.636 06:23:11 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:05.636 06:23:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:05.636 { 00:28:05.637 "params": { 00:28:05.637 "name": "Nvme$subsystem", 00:28:05.637 "trtype": "$TEST_TRANSPORT", 00:28:05.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:05.637 "adrfam": "ipv4", 00:28:05.637 "trsvcid": "$NVMF_PORT", 00:28:05.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:05.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:05.637 "hdgst": ${hdgst:-false}, 00:28:05.637 "ddgst": ${ddgst:-false} 00:28:05.637 }, 00:28:05.637 "method": "bdev_nvme_attach_controller" 00:28:05.637 } 00:28:05.637 EOF 00:28:05.637 )") 00:28:05.637 06:23:11 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:05.637 06:23:11 -- target/dif.sh@82 -- # gen_fio_conf 00:28:05.637 06:23:11 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:05.637 06:23:11 -- target/dif.sh@54 -- # local file 00:28:05.637 06:23:11 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:05.637 06:23:11 -- target/dif.sh@56 -- # cat 00:28:05.637 06:23:11 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:05.637 06:23:11 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:05.637 06:23:11 -- common/autotest_common.sh@1320 -- # shift 00:28:05.637 06:23:11 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:05.637 06:23:11 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:05.637 06:23:11 -- nvmf/common.sh@542 -- # cat 00:28:05.637 06:23:11 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:05.637 06:23:11 -- common/autotest_common.sh@1324 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:05.637 06:23:11 -- target/dif.sh@72 -- # (( file <= files )) 00:28:05.637 06:23:11 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:05.637 06:23:11 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:05.637 06:23:11 -- nvmf/common.sh@544 -- # jq . 00:28:05.637 06:23:11 -- nvmf/common.sh@545 -- # IFS=, 00:28:05.637 06:23:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:05.637 "params": { 00:28:05.637 "name": "Nvme0", 00:28:05.637 "trtype": "tcp", 00:28:05.637 "traddr": "10.0.0.2", 00:28:05.637 "adrfam": "ipv4", 00:28:05.637 "trsvcid": "4420", 00:28:05.637 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:05.637 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:05.637 "hdgst": false, 00:28:05.637 "ddgst": false 00:28:05.637 }, 00:28:05.637 "method": "bdev_nvme_attach_controller" 00:28:05.637 }' 00:28:05.637 06:23:11 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:05.637 06:23:11 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:05.637 06:23:11 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:05.637 06:23:11 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:05.637 06:23:11 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:05.637 06:23:11 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:05.637 06:23:11 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:05.637 06:23:11 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:05.637 06:23:11 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:05.637 06:23:11 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:05.637 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:05.637 ... 00:28:05.637 fio-3.35 00:28:05.637 Starting 3 threads 00:28:05.637 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.896 [2024-07-13 06:23:12.288321] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:28:05.896 [2024-07-13 06:23:12.288396] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:28:11.157 00:28:11.157 filename0: (groupid=0, jobs=1): err= 0: pid=1248950: Sat Jul 13 06:23:17 2024 00:28:11.157 read: IOPS=213, BW=26.6MiB/s (27.9MB/s)(134MiB/5043msec) 00:28:11.157 slat (nsec): min=4390, max=41643, avg=14789.08, stdev=4459.33 00:28:11.157 clat (usec): min=4848, max=88399, avg=14014.45, stdev=13588.23 00:28:11.157 lat (usec): min=4859, max=88411, avg=14029.24, stdev=13588.35 00:28:11.157 clat percentiles (usec): 00:28:11.157 | 1.00th=[ 5080], 5.00th=[ 5473], 10.00th=[ 5800], 20.00th=[ 7177], 00:28:11.157 | 30.00th=[ 8160], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[10552], 00:28:11.157 | 70.00th=[11600], 80.00th=[12387], 90.00th=[48497], 95.00th=[50594], 00:28:11.157 | 99.00th=[52691], 99.50th=[53216], 99.90th=[54264], 99.95th=[88605], 00:28:11.157 | 99.99th=[88605] 00:28:11.157 bw ( KiB/s): min=21760, max=34816, per=34.64%, avg=27468.80, stdev=4152.59, samples=10 00:28:11.157 iops : min= 170, max= 272, avg=214.60, stdev=32.44, samples=10 00:28:11.157 lat (msec) : 10=55.63%, 20=32.47%, 50=6.14%, 100=5.77% 00:28:11.157 cpu : usr=94.13%, sys=5.28%, ctx=11, majf=0, minf=69 00:28:11.157 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:11.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.157 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.157 issued rwts: total=1075,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.157 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:11.157 filename0: (groupid=0, jobs=1): err= 0: pid=1248951: Sat Jul 13 06:23:17 2024 00:28:11.157 read: IOPS=211, BW=26.4MiB/s (27.7MB/s)(133MiB/5033msec) 00:28:11.157 slat (nsec): min=4490, max=54067, avg=14385.73, stdev=4029.75 00:28:11.157 clat (usec): min=4832, max=89203, avg=14183.50, stdev=13162.78 00:28:11.157 lat (usec): min=4844, max=89216, avg=14197.88, stdev=13162.79 00:28:11.157 clat percentiles (usec): 00:28:11.157 | 1.00th=[ 5473], 5.00th=[ 5866], 10.00th=[ 6652], 20.00th=[ 7963], 00:28:11.157 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 9503], 60.00th=[10814], 00:28:11.157 | 70.00th=[11731], 80.00th=[12649], 90.00th=[47449], 95.00th=[50070], 00:28:11.157 | 99.00th=[53216], 99.50th=[53740], 99.90th=[54789], 99.95th=[89654], 00:28:11.157 | 99.99th=[89654] 00:28:11.157 bw ( KiB/s): min=19456, max=34816, per=34.23%, avg=27142.60, stdev=4407.63, samples=10 00:28:11.157 iops : min= 152, max= 272, avg=212.00, stdev=34.36, samples=10 00:28:11.157 lat (msec) : 10=53.81%, 20=35.00%, 50=5.74%, 100=5.46% 00:28:11.157 cpu : usr=95.51%, sys=4.05%, ctx=17, majf=0, minf=166 00:28:11.157 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:11.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.157 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.157 issued rwts: total=1063,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.157 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:11.157 filename0: (groupid=0, jobs=1): err= 0: pid=1248952: Sat Jul 13 06:23:17 2024 00:28:11.157 read: IOPS=195, BW=24.5MiB/s (25.7MB/s)(123MiB/5035msec) 00:28:11.157 slat (nsec): min=4577, max=35464, avg=14729.12, stdev=3599.06 00:28:11.157 clat (usec): min=4762, max=92781, avg=15299.29, stdev=15122.99 00:28:11.157 lat (usec): min=4775, max=92801, avg=15314.02, stdev=15123.14 00:28:11.157 clat 
percentiles (usec): 00:28:11.157 | 1.00th=[ 5211], 5.00th=[ 5604], 10.00th=[ 5932], 20.00th=[ 7570], 00:28:11.157 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9896], 60.00th=[11469], 00:28:11.157 | 70.00th=[12387], 80.00th=[13435], 90.00th=[50070], 95.00th=[52167], 00:28:11.157 | 99.00th=[54264], 99.50th=[88605], 99.90th=[92799], 99.95th=[92799], 00:28:11.157 | 99.99th=[92799] 00:28:11.157 bw ( KiB/s): min=17664, max=33024, per=31.74%, avg=25164.80, stdev=4965.56, samples=10 00:28:11.157 iops : min= 138, max= 258, avg=196.60, stdev=38.79, samples=10 00:28:11.157 lat (msec) : 10=50.81%, 20=36.11%, 50=3.55%, 100=9.53% 00:28:11.157 cpu : usr=94.89%, sys=4.41%, ctx=15, majf=0, minf=71 00:28:11.157 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:11.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.157 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:11.157 issued rwts: total=986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:11.157 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:11.157 00:28:11.157 Run status group 0 (all jobs): 00:28:11.157 READ: bw=77.4MiB/s (81.2MB/s), 24.5MiB/s-26.6MiB/s (25.7MB/s-27.9MB/s), io=391MiB (409MB), run=5033-5043msec 00:28:11.157 06:23:17 -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:11.157 06:23:17 -- target/dif.sh@43 -- # local sub 00:28:11.157 06:23:17 -- target/dif.sh@45 -- # for sub in "$@" 00:28:11.157 06:23:17 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:11.157 06:23:17 -- target/dif.sh@36 -- # local sub_id=0 00:28:11.157 06:23:17 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:11.157 06:23:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:11.157 06:23:17 -- common/autotest_common.sh@10 -- # set +x 00:28:11.157 06:23:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:11.157 06:23:17 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:11.157 06:23:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:11.157 06:23:17 -- common/autotest_common.sh@10 -- # set +x 00:28:11.417 06:23:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:11.417 06:23:17 -- target/dif.sh@109 -- # NULL_DIF=2 00:28:11.417 06:23:17 -- target/dif.sh@109 -- # bs=4k 00:28:11.417 06:23:17 -- target/dif.sh@109 -- # numjobs=8 00:28:11.417 06:23:17 -- target/dif.sh@109 -- # iodepth=16 00:28:11.417 06:23:17 -- target/dif.sh@109 -- # runtime= 00:28:11.417 06:23:17 -- target/dif.sh@109 -- # files=2 00:28:11.417 06:23:17 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:11.417 06:23:17 -- target/dif.sh@28 -- # local sub 00:28:11.417 06:23:17 -- target/dif.sh@30 -- # for sub in "$@" 00:28:11.417 06:23:17 -- target/dif.sh@31 -- # create_subsystem 0 00:28:11.417 06:23:17 -- target/dif.sh@18 -- # local sub_id=0 00:28:11.417 06:23:17 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:11.417 06:23:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:11.417 06:23:17 -- common/autotest_common.sh@10 -- # set +x 00:28:11.417 bdev_null0 00:28:11.417 06:23:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:11.417 06:23:17 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:11.417 06:23:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:11.417 06:23:17 -- common/autotest_common.sh@10 -- # set +x 00:28:11.417 06:23:17 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:11.417 06:23:17 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:11.417 06:23:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:11.417 06:23:17 -- common/autotest_common.sh@10 -- # set +x 00:28:11.417 06:23:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:11.417 06:23:17 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:11.417 06:23:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:11.417 06:23:17 -- common/autotest_common.sh@10 -- # set +x 00:28:11.417 [2024-07-13 06:23:17.699175] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:11.417 06:23:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:11.417 06:23:17 -- target/dif.sh@30 -- # for sub in "$@" 00:28:11.417 06:23:17 -- target/dif.sh@31 -- # create_subsystem 1 00:28:11.417 06:23:17 -- target/dif.sh@18 -- # local sub_id=1 00:28:11.417 06:23:17 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:11.417 06:23:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:11.417 06:23:17 -- common/autotest_common.sh@10 -- # set +x 00:28:11.417 bdev_null1 00:28:11.417 06:23:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:11.417 06:23:17 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:11.417 06:23:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:11.417 06:23:17 -- common/autotest_common.sh@10 -- # set +x 00:28:11.417 06:23:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:11.417 06:23:17 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:11.417 06:23:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:11.417 06:23:17 -- common/autotest_common.sh@10 -- # set +x 00:28:11.417 06:23:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:11.417 06:23:17 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:11.417 06:23:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:11.417 06:23:17 -- common/autotest_common.sh@10 -- # set +x 00:28:11.417 06:23:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:11.417 06:23:17 -- target/dif.sh@30 -- # for sub in "$@" 00:28:11.417 06:23:17 -- target/dif.sh@31 -- # create_subsystem 2 00:28:11.417 06:23:17 -- target/dif.sh@18 -- # local sub_id=2 00:28:11.417 06:23:17 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:11.417 06:23:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:11.417 06:23:17 -- common/autotest_common.sh@10 -- # set +x 00:28:11.417 bdev_null2 00:28:11.417 06:23:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:11.417 06:23:17 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:11.418 06:23:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:11.418 06:23:17 -- common/autotest_common.sh@10 -- # set +x 00:28:11.418 06:23:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:11.418 06:23:17 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:11.418 06:23:17 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:28:11.418 06:23:17 -- common/autotest_common.sh@10 -- # set +x 00:28:11.418 06:23:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:11.418 06:23:17 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:11.418 06:23:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:11.418 06:23:17 -- common/autotest_common.sh@10 -- # set +x 00:28:11.418 06:23:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:11.418 06:23:17 -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:11.418 06:23:17 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:28:11.418 06:23:17 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:28:11.418 06:23:17 -- nvmf/common.sh@520 -- # config=() 00:28:11.418 06:23:17 -- nvmf/common.sh@520 -- # local subsystem config 00:28:11.418 06:23:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:11.418 06:23:17 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:11.418 06:23:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:11.418 { 00:28:11.418 "params": { 00:28:11.418 "name": "Nvme$subsystem", 00:28:11.418 "trtype": "$TEST_TRANSPORT", 00:28:11.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.418 "adrfam": "ipv4", 00:28:11.418 "trsvcid": "$NVMF_PORT", 00:28:11.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.418 "hdgst": ${hdgst:-false}, 00:28:11.418 "ddgst": ${ddgst:-false} 00:28:11.418 }, 00:28:11.418 "method": "bdev_nvme_attach_controller" 00:28:11.418 } 00:28:11.418 EOF 00:28:11.418 )") 00:28:11.418 06:23:17 -- target/dif.sh@82 -- # gen_fio_conf 00:28:11.418 06:23:17 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:11.418 06:23:17 -- target/dif.sh@54 -- # local file 00:28:11.418 06:23:17 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:11.418 06:23:17 -- target/dif.sh@56 -- # cat 00:28:11.418 06:23:17 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:11.418 06:23:17 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:11.418 06:23:17 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:11.418 06:23:17 -- common/autotest_common.sh@1320 -- # shift 00:28:11.418 06:23:17 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:11.418 06:23:17 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:11.418 06:23:17 -- nvmf/common.sh@542 -- # cat 00:28:11.418 06:23:17 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:11.418 06:23:17 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:11.418 06:23:17 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:11.418 06:23:17 -- target/dif.sh@72 -- # (( file <= files )) 00:28:11.418 06:23:17 -- target/dif.sh@73 -- # cat 00:28:11.418 06:23:17 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:11.418 06:23:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:11.418 06:23:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:11.418 { 00:28:11.418 "params": { 00:28:11.418 "name": "Nvme$subsystem", 00:28:11.418 "trtype": "$TEST_TRANSPORT", 00:28:11.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.418 "adrfam": "ipv4", 
00:28:11.418 "trsvcid": "$NVMF_PORT", 00:28:11.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.418 "hdgst": ${hdgst:-false}, 00:28:11.418 "ddgst": ${ddgst:-false} 00:28:11.418 }, 00:28:11.418 "method": "bdev_nvme_attach_controller" 00:28:11.418 } 00:28:11.418 EOF 00:28:11.418 )") 00:28:11.418 06:23:17 -- nvmf/common.sh@542 -- # cat 00:28:11.418 06:23:17 -- target/dif.sh@72 -- # (( file++ )) 00:28:11.418 06:23:17 -- target/dif.sh@72 -- # (( file <= files )) 00:28:11.418 06:23:17 -- target/dif.sh@73 -- # cat 00:28:11.418 06:23:17 -- target/dif.sh@72 -- # (( file++ )) 00:28:11.418 06:23:17 -- target/dif.sh@72 -- # (( file <= files )) 00:28:11.418 06:23:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:11.418 06:23:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:11.418 { 00:28:11.418 "params": { 00:28:11.418 "name": "Nvme$subsystem", 00:28:11.418 "trtype": "$TEST_TRANSPORT", 00:28:11.418 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:11.418 "adrfam": "ipv4", 00:28:11.418 "trsvcid": "$NVMF_PORT", 00:28:11.418 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:11.418 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:11.418 "hdgst": ${hdgst:-false}, 00:28:11.418 "ddgst": ${ddgst:-false} 00:28:11.418 }, 00:28:11.418 "method": "bdev_nvme_attach_controller" 00:28:11.418 } 00:28:11.418 EOF 00:28:11.418 )") 00:28:11.418 06:23:17 -- nvmf/common.sh@542 -- # cat 00:28:11.418 06:23:17 -- nvmf/common.sh@544 -- # jq . 00:28:11.418 06:23:17 -- nvmf/common.sh@545 -- # IFS=, 00:28:11.418 06:23:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:11.418 "params": { 00:28:11.418 "name": "Nvme0", 00:28:11.418 "trtype": "tcp", 00:28:11.418 "traddr": "10.0.0.2", 00:28:11.418 "adrfam": "ipv4", 00:28:11.418 "trsvcid": "4420", 00:28:11.418 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:11.418 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:11.418 "hdgst": false, 00:28:11.418 "ddgst": false 00:28:11.418 }, 00:28:11.418 "method": "bdev_nvme_attach_controller" 00:28:11.418 },{ 00:28:11.418 "params": { 00:28:11.418 "name": "Nvme1", 00:28:11.418 "trtype": "tcp", 00:28:11.418 "traddr": "10.0.0.2", 00:28:11.418 "adrfam": "ipv4", 00:28:11.418 "trsvcid": "4420", 00:28:11.418 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:11.418 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:11.418 "hdgst": false, 00:28:11.418 "ddgst": false 00:28:11.418 }, 00:28:11.418 "method": "bdev_nvme_attach_controller" 00:28:11.418 },{ 00:28:11.418 "params": { 00:28:11.418 "name": "Nvme2", 00:28:11.418 "trtype": "tcp", 00:28:11.418 "traddr": "10.0.0.2", 00:28:11.418 "adrfam": "ipv4", 00:28:11.418 "trsvcid": "4420", 00:28:11.418 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:11.418 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:11.418 "hdgst": false, 00:28:11.418 "ddgst": false 00:28:11.418 }, 00:28:11.418 "method": "bdev_nvme_attach_controller" 00:28:11.418 }' 00:28:11.418 06:23:17 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:11.418 06:23:17 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:11.418 06:23:17 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:11.418 06:23:17 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:11.418 06:23:17 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:11.418 06:23:17 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:11.418 06:23:17 -- common/autotest_common.sh@1324 -- # asan_lib= 
00:28:11.418 06:23:17 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:11.418 06:23:17 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:11.418 06:23:17 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:11.677 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:11.677 ... 00:28:11.677 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:11.677 ... 00:28:11.677 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:11.677 ... 00:28:11.677 fio-3.35 00:28:11.677 Starting 24 threads 00:28:11.677 EAL: No free 2048 kB hugepages reported on node 1 00:28:12.611 [2024-07-13 06:23:18.817800] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:28:12.611 [2024-07-13 06:23:18.817901] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:28:24.807 00:28:24.807 filename0: (groupid=0, jobs=1): err= 0: pid=1249837: Sat Jul 13 06:23:29 2024 00:28:24.807 read: IOPS=53, BW=215KiB/s (220kB/s)(2176KiB/10126msec) 00:28:24.807 slat (usec): min=8, max=104, avg=28.57, stdev=16.86 00:28:24.807 clat (msec): min=207, max=473, avg=297.58, stdev=42.99 00:28:24.807 lat (msec): min=207, max=473, avg=297.61, stdev=42.98 00:28:24.807 clat percentiles (msec): 00:28:24.807 | 1.00th=[ 211], 5.00th=[ 213], 10.00th=[ 245], 20.00th=[ 271], 00:28:24.807 | 30.00th=[ 288], 40.00th=[ 292], 50.00th=[ 296], 60.00th=[ 305], 00:28:24.807 | 70.00th=[ 309], 80.00th=[ 317], 90.00th=[ 338], 95.00th=[ 384], 00:28:24.807 | 99.00th=[ 418], 99.50th=[ 439], 99.90th=[ 472], 99.95th=[ 472], 00:28:24.807 | 99.99th=[ 472] 00:28:24.807 bw ( KiB/s): min= 128, max= 368, per=3.95%, avg=211.20, stdev=71.10, samples=20 00:28:24.807 iops : min= 32, max= 92, avg=52.80, stdev=17.78, samples=20 00:28:24.807 lat (msec) : 250=11.76%, 500=88.24% 00:28:24.807 cpu : usr=98.59%, sys=0.96%, ctx=23, majf=0, minf=44 00:28:24.807 IO depths : 1=2.9%, 2=9.2%, 4=25.0%, 8=53.3%, 16=9.6%, 32=0.0%, >=64=0.0% 00:28:24.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.807 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.807 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.807 filename0: (groupid=0, jobs=1): err= 0: pid=1249838: Sat Jul 13 06:23:29 2024 00:28:24.807 read: IOPS=53, BW=215KiB/s (220kB/s)(2176KiB/10132msec) 00:28:24.807 slat (usec): min=5, max=120, avg=31.81, stdev=21.67 00:28:24.807 clat (msec): min=208, max=417, avg=297.69, stdev=44.81 00:28:24.807 lat (msec): min=208, max=417, avg=297.72, stdev=44.80 00:28:24.807 clat percentiles (msec): 00:28:24.807 | 1.00th=[ 209], 5.00th=[ 211], 10.00th=[ 234], 20.00th=[ 271], 00:28:24.807 | 30.00th=[ 288], 40.00th=[ 296], 50.00th=[ 300], 60.00th=[ 300], 00:28:24.807 | 70.00th=[ 313], 80.00th=[ 330], 90.00th=[ 347], 95.00th=[ 380], 00:28:24.807 | 99.00th=[ 418], 99.50th=[ 418], 99.90th=[ 418], 99.95th=[ 418], 00:28:24.807 | 99.99th=[ 418] 00:28:24.807 bw ( KiB/s): min= 128, max= 256, per=3.95%, avg=211.20, stdev=62.64, samples=20 00:28:24.807 iops : min= 32, max= 64, avg=52.80, stdev=15.66, samples=20 00:28:24.807 
lat (msec) : 250=17.65%, 500=82.35% 00:28:24.807 cpu : usr=98.21%, sys=1.15%, ctx=39, majf=0, minf=42 00:28:24.807 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:24.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.807 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.807 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.807 filename0: (groupid=0, jobs=1): err= 0: pid=1249839: Sat Jul 13 06:23:29 2024 00:28:24.807 read: IOPS=56, BW=227KiB/s (232kB/s)(2304KiB/10159msec) 00:28:24.807 slat (usec): min=4, max=138, avg=39.65, stdev=25.21 00:28:24.807 clat (msec): min=144, max=444, avg=281.80, stdev=39.85 00:28:24.807 lat (msec): min=144, max=444, avg=281.84, stdev=39.86 00:28:24.807 clat percentiles (msec): 00:28:24.807 | 1.00th=[ 176], 5.00th=[ 209], 10.00th=[ 218], 20.00th=[ 257], 00:28:24.807 | 30.00th=[ 264], 40.00th=[ 284], 50.00th=[ 292], 60.00th=[ 300], 00:28:24.807 | 70.00th=[ 300], 80.00th=[ 305], 90.00th=[ 326], 95.00th=[ 338], 00:28:24.807 | 99.00th=[ 393], 99.50th=[ 426], 99.90th=[ 447], 99.95th=[ 447], 00:28:24.807 | 99.99th=[ 447] 00:28:24.807 bw ( KiB/s): min= 128, max= 272, per=4.17%, avg=224.00, stdev=55.43, samples=20 00:28:24.807 iops : min= 32, max= 68, avg=56.00, stdev=13.86, samples=20 00:28:24.807 lat (msec) : 250=19.44%, 500=80.56% 00:28:24.807 cpu : usr=97.94%, sys=1.39%, ctx=37, majf=0, minf=39 00:28:24.807 IO depths : 1=3.5%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.0%, 32=0.0%, >=64=0.0% 00:28:24.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.807 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.807 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.807 filename0: (groupid=0, jobs=1): err= 0: pid=1249840: Sat Jul 13 06:23:29 2024 00:28:24.807 read: IOPS=53, BW=214KiB/s (219kB/s)(2168KiB/10142msec) 00:28:24.807 slat (usec): min=7, max=126, avg=81.79, stdev=21.76 00:28:24.807 clat (msec): min=123, max=515, avg=298.55, stdev=57.74 00:28:24.807 lat (msec): min=123, max=515, avg=298.63, stdev=57.75 00:28:24.807 clat percentiles (msec): 00:28:24.807 | 1.00th=[ 136], 5.00th=[ 209], 10.00th=[ 215], 20.00th=[ 262], 00:28:24.807 | 30.00th=[ 284], 40.00th=[ 296], 50.00th=[ 305], 60.00th=[ 305], 00:28:24.807 | 70.00th=[ 321], 80.00th=[ 330], 90.00th=[ 338], 95.00th=[ 418], 00:28:24.807 | 99.00th=[ 493], 99.50th=[ 510], 99.90th=[ 514], 99.95th=[ 514], 00:28:24.807 | 99.99th=[ 514] 00:28:24.807 bw ( KiB/s): min= 128, max= 256, per=3.93%, avg=210.40, stdev=60.60, samples=20 00:28:24.807 iops : min= 32, max= 64, avg=52.60, stdev=15.15, samples=20 00:28:24.807 lat (msec) : 250=18.82%, 500=80.44%, 750=0.74% 00:28:24.807 cpu : usr=98.24%, sys=0.98%, ctx=18, majf=0, minf=32 00:28:24.807 IO depths : 1=3.1%, 2=9.4%, 4=25.1%, 8=53.1%, 16=9.2%, 32=0.0%, >=64=0.0% 00:28:24.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.807 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.807 issued rwts: total=542,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.807 filename0: (groupid=0, jobs=1): err= 0: pid=1249841: Sat Jul 13 06:23:29 2024 00:28:24.807 read: IOPS=58, BW=233KiB/s (239kB/s)(2368KiB/10144msec) 00:28:24.807 slat (nsec): 
min=4071, max=54313, avg=24712.00, stdev=10534.80 00:28:24.807 clat (msec): min=139, max=484, avg=273.95, stdev=46.78 00:28:24.807 lat (msec): min=139, max=484, avg=273.97, stdev=46.79 00:28:24.807 clat percentiles (msec): 00:28:24.807 | 1.00th=[ 142], 5.00th=[ 146], 10.00th=[ 211], 20.00th=[ 245], 00:28:24.807 | 30.00th=[ 259], 40.00th=[ 275], 50.00th=[ 288], 60.00th=[ 296], 00:28:24.807 | 70.00th=[ 300], 80.00th=[ 305], 90.00th=[ 317], 95.00th=[ 338], 00:28:24.807 | 99.00th=[ 351], 99.50th=[ 359], 99.90th=[ 485], 99.95th=[ 485], 00:28:24.807 | 99.99th=[ 485] 00:28:24.807 bw ( KiB/s): min= 128, max= 384, per=4.30%, avg=230.40, stdev=65.54, samples=20 00:28:24.807 iops : min= 32, max= 96, avg=57.60, stdev=16.38, samples=20 00:28:24.807 lat (msec) : 250=25.68%, 500=74.32% 00:28:24.807 cpu : usr=98.53%, sys=1.05%, ctx=20, majf=0, minf=47 00:28:24.807 IO depths : 1=4.6%, 2=10.8%, 4=25.0%, 8=51.7%, 16=7.9%, 32=0.0%, >=64=0.0% 00:28:24.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.807 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.807 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.807 filename0: (groupid=0, jobs=1): err= 0: pid=1249842: Sat Jul 13 06:23:29 2024 00:28:24.807 read: IOPS=52, BW=209KiB/s (214kB/s)(2112KiB/10093msec) 00:28:24.807 slat (nsec): min=7955, max=93230, avg=24629.99, stdev=10023.63 00:28:24.807 clat (msec): min=127, max=493, avg=305.61, stdev=54.99 00:28:24.807 lat (msec): min=128, max=493, avg=305.64, stdev=54.99 00:28:24.807 clat percentiles (msec): 00:28:24.807 | 1.00th=[ 213], 5.00th=[ 213], 10.00th=[ 220], 20.00th=[ 288], 00:28:24.807 | 30.00th=[ 292], 40.00th=[ 300], 50.00th=[ 305], 60.00th=[ 309], 00:28:24.807 | 70.00th=[ 317], 80.00th=[ 317], 90.00th=[ 368], 95.00th=[ 451], 00:28:24.807 | 99.00th=[ 460], 99.50th=[ 460], 99.90th=[ 493], 99.95th=[ 493], 00:28:24.807 | 99.99th=[ 493] 00:28:24.807 bw ( KiB/s): min= 128, max= 384, per=3.82%, avg=204.80, stdev=72.79, samples=20 00:28:24.807 iops : min= 32, max= 96, avg=51.20, stdev=18.20, samples=20 00:28:24.807 lat (msec) : 250=13.26%, 500=86.74% 00:28:24.807 cpu : usr=98.55%, sys=0.99%, ctx=42, majf=0, minf=37 00:28:24.807 IO depths : 1=5.5%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:28:24.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.807 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.807 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.807 filename0: (groupid=0, jobs=1): err= 0: pid=1249843: Sat Jul 13 06:23:29 2024 00:28:24.807 read: IOPS=58, BW=233KiB/s (239kB/s)(2368KiB/10161msec) 00:28:24.807 slat (usec): min=8, max=127, avg=33.19, stdev=24.22 00:28:24.807 clat (msec): min=125, max=456, avg=274.26, stdev=47.29 00:28:24.807 lat (msec): min=125, max=457, avg=274.29, stdev=47.30 00:28:24.807 clat percentiles (msec): 00:28:24.807 | 1.00th=[ 161], 5.00th=[ 190], 10.00th=[ 211], 20.00th=[ 236], 00:28:24.807 | 30.00th=[ 259], 40.00th=[ 275], 50.00th=[ 284], 60.00th=[ 296], 00:28:24.807 | 70.00th=[ 300], 80.00th=[ 305], 90.00th=[ 313], 95.00th=[ 338], 00:28:24.807 | 99.00th=[ 435], 99.50th=[ 451], 99.90th=[ 456], 99.95th=[ 456], 00:28:24.807 | 99.99th=[ 456] 00:28:24.807 bw ( KiB/s): min= 128, max= 368, per=4.30%, avg=230.40, stdev=62.38, samples=20 00:28:24.807 iops : min= 32, 
max= 92, avg=57.60, stdev=15.59, samples=20 00:28:24.807 lat (msec) : 250=25.00%, 500=75.00% 00:28:24.807 cpu : usr=98.70%, sys=0.92%, ctx=17, majf=0, minf=53 00:28:24.807 IO depths : 1=2.7%, 2=9.0%, 4=25.0%, 8=53.5%, 16=9.8%, 32=0.0%, >=64=0.0% 00:28:24.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.807 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.807 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.807 filename0: (groupid=0, jobs=1): err= 0: pid=1249844: Sat Jul 13 06:23:29 2024 00:28:24.807 read: IOPS=58, BW=233KiB/s (239kB/s)(2368KiB/10159msec) 00:28:24.807 slat (usec): min=5, max=164, avg=41.23, stdev=32.02 00:28:24.807 clat (msec): min=70, max=443, avg=273.43, stdev=59.34 00:28:24.807 lat (msec): min=70, max=443, avg=273.47, stdev=59.34 00:28:24.807 clat percentiles (msec): 00:28:24.807 | 1.00th=[ 71], 5.00th=[ 123], 10.00th=[ 211], 20.00th=[ 236], 00:28:24.807 | 30.00th=[ 259], 40.00th=[ 279], 50.00th=[ 292], 60.00th=[ 300], 00:28:24.807 | 70.00th=[ 305], 80.00th=[ 305], 90.00th=[ 321], 95.00th=[ 338], 00:28:24.807 | 99.00th=[ 414], 99.50th=[ 426], 99.90th=[ 443], 99.95th=[ 443], 00:28:24.807 | 99.99th=[ 443] 00:28:24.807 bw ( KiB/s): min= 128, max= 384, per=4.30%, avg=230.40, stdev=64.08, samples=20 00:28:24.807 iops : min= 32, max= 96, avg=57.60, stdev=16.02, samples=20 00:28:24.807 lat (msec) : 100=2.70%, 250=22.30%, 500=75.00% 00:28:24.807 cpu : usr=98.18%, sys=1.03%, ctx=63, majf=0, minf=52 00:28:24.807 IO depths : 1=2.7%, 2=9.0%, 4=25.0%, 8=53.5%, 16=9.8%, 32=0.0%, >=64=0.0% 00:28:24.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.807 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.808 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.808 filename1: (groupid=0, jobs=1): err= 0: pid=1249845: Sat Jul 13 06:23:29 2024 00:28:24.808 read: IOPS=58, BW=233KiB/s (239kB/s)(2368KiB/10161msec) 00:28:24.808 slat (usec): min=4, max=162, avg=36.47, stdev=26.08 00:28:24.808 clat (msec): min=120, max=462, avg=274.22, stdev=52.48 00:28:24.808 lat (msec): min=120, max=462, avg=274.26, stdev=52.48 00:28:24.808 clat percentiles (msec): 00:28:24.808 | 1.00th=[ 126], 5.00th=[ 163], 10.00th=[ 211], 20.00th=[ 230], 00:28:24.808 | 30.00th=[ 259], 40.00th=[ 279], 50.00th=[ 292], 60.00th=[ 296], 00:28:24.808 | 70.00th=[ 300], 80.00th=[ 305], 90.00th=[ 321], 95.00th=[ 338], 00:28:24.808 | 99.00th=[ 384], 99.50th=[ 405], 99.90th=[ 464], 99.95th=[ 464], 00:28:24.808 | 99.99th=[ 464] 00:28:24.808 bw ( KiB/s): min= 128, max= 256, per=4.30%, avg=230.40, stdev=52.53, samples=20 00:28:24.808 iops : min= 32, max= 64, avg=57.60, stdev=13.13, samples=20 00:28:24.808 lat (msec) : 250=25.34%, 500=74.66% 00:28:24.808 cpu : usr=97.71%, sys=1.57%, ctx=27, majf=0, minf=46 00:28:24.808 IO depths : 1=4.7%, 2=11.0%, 4=25.0%, 8=51.5%, 16=7.8%, 32=0.0%, >=64=0.0% 00:28:24.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.808 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.808 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.808 filename1: (groupid=0, jobs=1): err= 0: pid=1249846: Sat Jul 13 06:23:29 2024 00:28:24.808 read: IOPS=52, BW=209KiB/s 
(214kB/s)(2112KiB/10128msec) 00:28:24.808 slat (usec): min=5, max=101, avg=23.79, stdev=11.00 00:28:24.808 clat (msec): min=125, max=507, avg=305.77, stdev=57.51 00:28:24.808 lat (msec): min=125, max=507, avg=305.79, stdev=57.50 00:28:24.808 clat percentiles (msec): 00:28:24.808 | 1.00th=[ 174], 5.00th=[ 213], 10.00th=[ 220], 20.00th=[ 288], 00:28:24.808 | 30.00th=[ 292], 40.00th=[ 300], 50.00th=[ 305], 60.00th=[ 309], 00:28:24.808 | 70.00th=[ 317], 80.00th=[ 317], 90.00th=[ 376], 95.00th=[ 451], 00:28:24.808 | 99.00th=[ 481], 99.50th=[ 493], 99.90th=[ 506], 99.95th=[ 506], 00:28:24.808 | 99.99th=[ 506] 00:28:24.808 bw ( KiB/s): min= 128, max= 384, per=3.82%, avg=204.80, stdev=76.75, samples=20 00:28:24.808 iops : min= 32, max= 96, avg=51.20, stdev=19.19, samples=20 00:28:24.808 lat (msec) : 250=13.64%, 500=85.98%, 750=0.38% 00:28:24.808 cpu : usr=98.36%, sys=1.09%, ctx=19, majf=0, minf=47 00:28:24.808 IO depths : 1=4.9%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.6%, 32=0.0%, >=64=0.0% 00:28:24.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.808 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.808 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.808 filename1: (groupid=0, jobs=1): err= 0: pid=1249847: Sat Jul 13 06:23:29 2024 00:28:24.808 read: IOPS=53, BW=214KiB/s (219kB/s)(2168KiB/10129msec) 00:28:24.808 slat (usec): min=7, max=128, avg=38.38, stdev=29.36 00:28:24.808 clat (msec): min=119, max=517, avg=298.53, stdev=66.16 00:28:24.808 lat (msec): min=119, max=517, avg=298.57, stdev=66.16 00:28:24.808 clat percentiles (msec): 00:28:24.808 | 1.00th=[ 130], 5.00th=[ 163], 10.00th=[ 213], 20.00th=[ 251], 00:28:24.808 | 30.00th=[ 292], 40.00th=[ 300], 50.00th=[ 305], 60.00th=[ 309], 00:28:24.808 | 70.00th=[ 317], 80.00th=[ 334], 90.00th=[ 384], 95.00th=[ 422], 00:28:24.808 | 99.00th=[ 498], 99.50th=[ 514], 99.90th=[ 518], 99.95th=[ 518], 00:28:24.808 | 99.99th=[ 518] 00:28:24.808 bw ( KiB/s): min= 128, max= 384, per=3.93%, avg=210.40, stdev=72.17, samples=20 00:28:24.808 iops : min= 32, max= 96, avg=52.60, stdev=18.04, samples=20 00:28:24.808 lat (msec) : 250=19.93%, 500=79.34%, 750=0.74% 00:28:24.808 cpu : usr=98.49%, sys=1.00%, ctx=16, majf=0, minf=37 00:28:24.808 IO depths : 1=3.0%, 2=9.2%, 4=25.1%, 8=53.3%, 16=9.4%, 32=0.0%, >=64=0.0% 00:28:24.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.808 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.808 issued rwts: total=542,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.808 filename1: (groupid=0, jobs=1): err= 0: pid=1249848: Sat Jul 13 06:23:29 2024 00:28:24.808 read: IOPS=61, BW=245KiB/s (251kB/s)(2496KiB/10171msec) 00:28:24.808 slat (nsec): min=5534, max=47488, avg=22441.54, stdev=7378.88 00:28:24.808 clat (msec): min=6, max=374, avg=259.68, stdev=77.26 00:28:24.808 lat (msec): min=6, max=374, avg=259.70, stdev=77.26 00:28:24.808 clat percentiles (msec): 00:28:24.808 | 1.00th=[ 7], 5.00th=[ 56], 10.00th=[ 188], 20.00th=[ 215], 00:28:24.808 | 30.00th=[ 251], 40.00th=[ 275], 50.00th=[ 288], 60.00th=[ 300], 00:28:24.808 | 70.00th=[ 305], 80.00th=[ 305], 90.00th=[ 321], 95.00th=[ 330], 00:28:24.808 | 99.00th=[ 347], 99.50th=[ 359], 99.90th=[ 376], 99.95th=[ 376], 00:28:24.808 | 99.99th=[ 376] 00:28:24.808 bw ( KiB/s): min= 128, max= 640, per=4.54%, 
avg=243.20, stdev=107.34, samples=20 00:28:24.808 iops : min= 32, max= 160, avg=60.80, stdev=26.84, samples=20 00:28:24.808 lat (msec) : 10=2.56%, 20=2.24%, 100=2.88%, 250=20.83%, 500=71.47% 00:28:24.808 cpu : usr=97.99%, sys=1.34%, ctx=29, majf=0, minf=43 00:28:24.808 IO depths : 1=4.6%, 2=10.6%, 4=23.7%, 8=53.0%, 16=8.0%, 32=0.0%, >=64=0.0% 00:28:24.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.808 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.808 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.808 filename1: (groupid=0, jobs=1): err= 0: pid=1249849: Sat Jul 13 06:23:29 2024 00:28:24.808 read: IOPS=52, BW=208KiB/s (213kB/s)(2112KiB/10136msec) 00:28:24.808 slat (usec): min=11, max=129, avg=82.53, stdev=19.85 00:28:24.808 clat (msec): min=120, max=512, avg=305.57, stdev=62.52 00:28:24.808 lat (msec): min=120, max=512, avg=305.65, stdev=62.53 00:28:24.808 clat percentiles (msec): 00:28:24.808 | 1.00th=[ 142], 5.00th=[ 213], 10.00th=[ 220], 20.00th=[ 279], 00:28:24.808 | 30.00th=[ 292], 40.00th=[ 300], 50.00th=[ 305], 60.00th=[ 309], 00:28:24.808 | 70.00th=[ 317], 80.00th=[ 321], 90.00th=[ 384], 95.00th=[ 451], 00:28:24.808 | 99.00th=[ 481], 99.50th=[ 489], 99.90th=[ 514], 99.95th=[ 514], 00:28:24.808 | 99.99th=[ 514] 00:28:24.808 bw ( KiB/s): min= 128, max= 384, per=3.82%, avg=204.80, stdev=72.79, samples=20 00:28:24.808 iops : min= 32, max= 96, avg=51.20, stdev=18.20, samples=20 00:28:24.808 lat (msec) : 250=14.77%, 500=84.85%, 750=0.38% 00:28:24.808 cpu : usr=97.89%, sys=1.29%, ctx=27, majf=0, minf=51 00:28:24.808 IO depths : 1=3.6%, 2=9.8%, 4=25.0%, 8=52.7%, 16=8.9%, 32=0.0%, >=64=0.0% 00:28:24.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.808 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.808 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.808 filename1: (groupid=0, jobs=1): err= 0: pid=1249850: Sat Jul 13 06:23:29 2024 00:28:24.808 read: IOPS=56, BW=227KiB/s (232kB/s)(2304KiB/10157msec) 00:28:24.808 slat (nsec): min=3814, max=88638, avg=33597.51, stdev=15195.58 00:28:24.808 clat (msec): min=208, max=363, avg=281.83, stdev=33.65 00:28:24.808 lat (msec): min=208, max=363, avg=281.87, stdev=33.65 00:28:24.808 clat percentiles (msec): 00:28:24.808 | 1.00th=[ 209], 5.00th=[ 211], 10.00th=[ 236], 20.00th=[ 257], 00:28:24.808 | 30.00th=[ 264], 40.00th=[ 279], 50.00th=[ 288], 60.00th=[ 296], 00:28:24.808 | 70.00th=[ 300], 80.00th=[ 305], 90.00th=[ 326], 95.00th=[ 330], 00:28:24.808 | 99.00th=[ 355], 99.50th=[ 359], 99.90th=[ 363], 99.95th=[ 363], 00:28:24.808 | 99.99th=[ 363] 00:28:24.808 bw ( KiB/s): min= 128, max= 272, per=4.17%, avg=224.00, stdev=57.10, samples=20 00:28:24.808 iops : min= 32, max= 68, avg=56.00, stdev=14.28, samples=20 00:28:24.808 lat (msec) : 250=18.06%, 500=81.94% 00:28:24.808 cpu : usr=98.18%, sys=1.28%, ctx=21, majf=0, minf=36 00:28:24.808 IO depths : 1=5.0%, 2=11.3%, 4=25.0%, 8=51.2%, 16=7.5%, 32=0.0%, >=64=0.0% 00:28:24.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.808 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.808 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.808 
filename1: (groupid=0, jobs=1): err= 0: pid=1249851: Sat Jul 13 06:23:29 2024 00:28:24.808 read: IOPS=56, BW=227KiB/s (232kB/s)(2304KiB/10160msec) 00:28:24.808 slat (usec): min=5, max=177, avg=59.83, stdev=32.98 00:28:24.808 clat (msec): min=70, max=443, avg=281.68, stdev=60.17 00:28:24.808 lat (msec): min=70, max=443, avg=281.74, stdev=60.19 00:28:24.808 clat percentiles (msec): 00:28:24.808 | 1.00th=[ 71], 5.00th=[ 123], 10.00th=[ 211], 20.00th=[ 259], 00:28:24.808 | 30.00th=[ 275], 40.00th=[ 292], 50.00th=[ 296], 60.00th=[ 300], 00:28:24.808 | 70.00th=[ 305], 80.00th=[ 317], 90.00th=[ 342], 95.00th=[ 347], 00:28:24.808 | 99.00th=[ 380], 99.50th=[ 409], 99.90th=[ 443], 99.95th=[ 443], 00:28:24.808 | 99.99th=[ 443] 00:28:24.808 bw ( KiB/s): min= 128, max= 384, per=4.17%, avg=224.00, stdev=70.42, samples=20 00:28:24.808 iops : min= 32, max= 96, avg=56.00, stdev=17.60, samples=20 00:28:24.808 lat (msec) : 100=2.78%, 250=14.24%, 500=82.99% 00:28:24.808 cpu : usr=97.79%, sys=1.48%, ctx=39, majf=0, minf=60 00:28:24.808 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:28:24.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.808 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.808 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.808 filename1: (groupid=0, jobs=1): err= 0: pid=1249852: Sat Jul 13 06:23:29 2024 00:28:24.808 read: IOPS=56, BW=227KiB/s (233kB/s)(2304KiB/10130msec) 00:28:24.808 slat (usec): min=8, max=128, avg=46.06, stdev=32.30 00:28:24.808 clat (msec): min=150, max=451, avg=280.99, stdev=37.13 00:28:24.808 lat (msec): min=150, max=451, avg=281.03, stdev=37.13 00:28:24.808 clat percentiles (msec): 00:28:24.808 | 1.00th=[ 209], 5.00th=[ 211], 10.00th=[ 218], 20.00th=[ 249], 00:28:24.808 | 30.00th=[ 264], 40.00th=[ 284], 50.00th=[ 292], 60.00th=[ 300], 00:28:24.808 | 70.00th=[ 300], 80.00th=[ 305], 90.00th=[ 321], 95.00th=[ 330], 00:28:24.808 | 99.00th=[ 347], 99.50th=[ 372], 99.90th=[ 451], 99.95th=[ 451], 00:28:24.808 | 99.99th=[ 451] 00:28:24.808 bw ( KiB/s): min= 128, max= 256, per=4.17%, avg=224.00, stdev=56.87, samples=20 00:28:24.808 iops : min= 32, max= 64, avg=56.00, stdev=14.22, samples=20 00:28:24.808 lat (msec) : 250=22.92%, 500=77.08% 00:28:24.808 cpu : usr=98.24%, sys=1.17%, ctx=39, majf=0, minf=36 00:28:24.808 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:28:24.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.808 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.808 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.808 filename2: (groupid=0, jobs=1): err= 0: pid=1249853: Sat Jul 13 06:23:29 2024 00:28:24.808 read: IOPS=56, BW=227KiB/s (232kB/s)(2304KiB/10154msec) 00:28:24.809 slat (nsec): min=8133, max=64941, avg=27300.26, stdev=11436.02 00:28:24.809 clat (msec): min=144, max=467, avg=281.81, stdev=40.50 00:28:24.809 lat (msec): min=144, max=467, avg=281.83, stdev=40.51 00:28:24.809 clat percentiles (msec): 00:28:24.809 | 1.00th=[ 155], 5.00th=[ 211], 10.00th=[ 218], 20.00th=[ 251], 00:28:24.809 | 30.00th=[ 264], 40.00th=[ 284], 50.00th=[ 292], 60.00th=[ 300], 00:28:24.809 | 70.00th=[ 300], 80.00th=[ 305], 90.00th=[ 321], 95.00th=[ 330], 00:28:24.809 | 99.00th=[ 397], 99.50th=[ 447], 99.90th=[ 468], 
99.95th=[ 468], 00:28:24.809 | 99.99th=[ 468] 00:28:24.809 bw ( KiB/s): min= 128, max= 272, per=4.17%, avg=224.00, stdev=57.10, samples=20 00:28:24.809 iops : min= 32, max= 68, avg=56.00, stdev=14.28, samples=20 00:28:24.809 lat (msec) : 250=18.06%, 500=81.94% 00:28:24.809 cpu : usr=98.64%, sys=0.98%, ctx=20, majf=0, minf=45 00:28:24.809 IO depths : 1=4.2%, 2=10.4%, 4=25.0%, 8=52.1%, 16=8.3%, 32=0.0%, >=64=0.0% 00:28:24.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.809 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.809 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.809 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.809 filename2: (groupid=0, jobs=1): err= 0: pid=1249854: Sat Jul 13 06:23:29 2024 00:28:24.809 read: IOPS=56, BW=227KiB/s (233kB/s)(2304KiB/10130msec) 00:28:24.809 slat (usec): min=8, max=194, avg=54.97, stdev=36.90 00:28:24.809 clat (msec): min=175, max=390, avg=280.89, stdev=37.26 00:28:24.809 lat (msec): min=175, max=390, avg=280.95, stdev=37.27 00:28:24.809 clat percentiles (msec): 00:28:24.809 | 1.00th=[ 209], 5.00th=[ 211], 10.00th=[ 218], 20.00th=[ 249], 00:28:24.809 | 30.00th=[ 264], 40.00th=[ 284], 50.00th=[ 296], 60.00th=[ 300], 00:28:24.809 | 70.00th=[ 300], 80.00th=[ 305], 90.00th=[ 326], 95.00th=[ 334], 00:28:24.809 | 99.00th=[ 338], 99.50th=[ 342], 99.90th=[ 393], 99.95th=[ 393], 00:28:24.809 | 99.99th=[ 393] 00:28:24.809 bw ( KiB/s): min= 128, max= 256, per=4.17%, avg=224.00, stdev=56.87, samples=20 00:28:24.809 iops : min= 32, max= 64, avg=56.00, stdev=14.22, samples=20 00:28:24.809 lat (msec) : 250=22.57%, 500=77.43% 00:28:24.809 cpu : usr=96.25%, sys=2.05%, ctx=204, majf=0, minf=35 00:28:24.809 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:28:24.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.809 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.809 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.809 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.809 filename2: (groupid=0, jobs=1): err= 0: pid=1249855: Sat Jul 13 06:23:29 2024 00:28:24.809 read: IOPS=53, BW=215KiB/s (220kB/s)(2176KiB/10130msec) 00:28:24.809 slat (usec): min=8, max=108, avg=30.21, stdev=23.32 00:28:24.809 clat (msec): min=135, max=429, avg=297.67, stdev=47.84 00:28:24.809 lat (msec): min=135, max=429, avg=297.70, stdev=47.83 00:28:24.809 clat percentiles (msec): 00:28:24.809 | 1.00th=[ 161], 5.00th=[ 213], 10.00th=[ 218], 20.00th=[ 266], 00:28:24.809 | 30.00th=[ 292], 40.00th=[ 300], 50.00th=[ 305], 60.00th=[ 309], 00:28:24.809 | 70.00th=[ 313], 80.00th=[ 321], 90.00th=[ 338], 95.00th=[ 384], 00:28:24.809 | 99.00th=[ 422], 99.50th=[ 422], 99.90th=[ 430], 99.95th=[ 430], 00:28:24.809 | 99.99th=[ 430] 00:28:24.809 bw ( KiB/s): min= 128, max= 368, per=3.95%, avg=211.20, stdev=73.89, samples=20 00:28:24.809 iops : min= 32, max= 92, avg=52.80, stdev=18.47, samples=20 00:28:24.809 lat (msec) : 250=17.65%, 500=82.35% 00:28:24.809 cpu : usr=98.08%, sys=1.21%, ctx=31, majf=0, minf=44 00:28:24.809 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:28:24.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.809 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.809 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.809 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:28:24.809 filename2: (groupid=0, jobs=1): err= 0: pid=1249856: Sat Jul 13 06:23:29 2024 00:28:24.809 read: IOPS=58, BW=233KiB/s (239kB/s)(2368KiB/10160msec) 00:28:24.809 slat (usec): min=4, max=118, avg=48.62, stdev=32.23 00:28:24.809 clat (msec): min=71, max=471, avg=273.39, stdev=63.79 00:28:24.809 lat (msec): min=71, max=471, avg=273.44, stdev=63.80 00:28:24.809 clat percentiles (msec): 00:28:24.809 | 1.00th=[ 72], 5.00th=[ 123], 10.00th=[ 211], 20.00th=[ 226], 00:28:24.809 | 30.00th=[ 259], 40.00th=[ 279], 50.00th=[ 292], 60.00th=[ 300], 00:28:24.809 | 70.00th=[ 305], 80.00th=[ 309], 90.00th=[ 330], 95.00th=[ 338], 00:28:24.809 | 99.00th=[ 418], 99.50th=[ 426], 99.90th=[ 472], 99.95th=[ 472], 00:28:24.809 | 99.99th=[ 472] 00:28:24.809 bw ( KiB/s): min= 128, max= 384, per=4.30%, avg=230.40, stdev=74.94, samples=20 00:28:24.809 iops : min= 32, max= 96, avg=57.60, stdev=18.73, samples=20 00:28:24.809 lat (msec) : 100=2.70%, 250=20.95%, 500=76.35% 00:28:24.809 cpu : usr=98.53%, sys=1.03%, ctx=17, majf=0, minf=54 00:28:24.809 IO depths : 1=3.7%, 2=10.0%, 4=25.0%, 8=52.5%, 16=8.8%, 32=0.0%, >=64=0.0% 00:28:24.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.809 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.809 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.809 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.809 filename2: (groupid=0, jobs=1): err= 0: pid=1249857: Sat Jul 13 06:23:29 2024 00:28:24.809 read: IOPS=58, BW=233KiB/s (239kB/s)(2368KiB/10157msec) 00:28:24.809 slat (nsec): min=7935, max=64093, avg=24791.74, stdev=10111.59 00:28:24.809 clat (msec): min=159, max=427, avg=274.29, stdev=42.55 00:28:24.809 lat (msec): min=159, max=427, avg=274.32, stdev=42.56 00:28:24.809 clat percentiles (msec): 00:28:24.809 | 1.00th=[ 171], 5.00th=[ 190], 10.00th=[ 211], 20.00th=[ 241], 00:28:24.809 | 30.00th=[ 259], 40.00th=[ 275], 50.00th=[ 284], 60.00th=[ 292], 00:28:24.809 | 70.00th=[ 300], 80.00th=[ 305], 90.00th=[ 321], 95.00th=[ 330], 00:28:24.809 | 99.00th=[ 380], 99.50th=[ 418], 99.90th=[ 426], 99.95th=[ 426], 00:28:24.809 | 99.99th=[ 426] 00:28:24.809 bw ( KiB/s): min= 128, max= 384, per=4.30%, avg=230.40, stdev=66.96, samples=20 00:28:24.809 iops : min= 32, max= 96, avg=57.60, stdev=16.74, samples=20 00:28:24.809 lat (msec) : 250=25.34%, 500=74.66% 00:28:24.809 cpu : usr=98.72%, sys=0.86%, ctx=16, majf=0, minf=35 00:28:24.809 IO depths : 1=5.1%, 2=11.3%, 4=25.0%, 8=51.2%, 16=7.4%, 32=0.0%, >=64=0.0% 00:28:24.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.809 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.809 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.809 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.809 filename2: (groupid=0, jobs=1): err= 0: pid=1249858: Sat Jul 13 06:23:29 2024 00:28:24.809 read: IOPS=52, BW=209KiB/s (214kB/s)(2112KiB/10091msec) 00:28:24.809 slat (nsec): min=7836, max=51184, avg=23060.08, stdev=10155.37 00:28:24.809 clat (msec): min=211, max=561, avg=305.56, stdev=55.54 00:28:24.809 lat (msec): min=211, max=561, avg=305.58, stdev=55.54 00:28:24.809 clat percentiles (msec): 00:28:24.809 | 1.00th=[ 213], 5.00th=[ 213], 10.00th=[ 236], 20.00th=[ 279], 00:28:24.809 | 30.00th=[ 292], 40.00th=[ 300], 50.00th=[ 305], 60.00th=[ 309], 00:28:24.809 | 70.00th=[ 317], 80.00th=[ 321], 90.00th=[ 368], 95.00th=[ 451], 
00:28:24.809 | 99.00th=[ 460], 99.50th=[ 502], 99.90th=[ 558], 99.95th=[ 558], 00:28:24.809 | 99.99th=[ 558] 00:28:24.809 bw ( KiB/s): min= 128, max= 384, per=3.82%, avg=204.80, stdev=74.07, samples=20 00:28:24.809 iops : min= 32, max= 96, avg=51.20, stdev=18.52, samples=20 00:28:24.809 lat (msec) : 250=14.77%, 500=84.47%, 750=0.76% 00:28:24.809 cpu : usr=98.42%, sys=1.06%, ctx=47, majf=0, minf=45 00:28:24.809 IO depths : 1=4.4%, 2=10.4%, 4=24.4%, 8=52.7%, 16=8.1%, 32=0.0%, >=64=0.0% 00:28:24.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.809 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.809 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.809 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.809 filename2: (groupid=0, jobs=1): err= 0: pid=1249859: Sat Jul 13 06:23:29 2024 00:28:24.809 read: IOPS=53, BW=215KiB/s (220kB/s)(2176KiB/10130msec) 00:28:24.809 slat (nsec): min=8013, max=95188, avg=27341.17, stdev=15081.69 00:28:24.809 clat (msec): min=95, max=520, avg=297.69, stdev=58.63 00:28:24.809 lat (msec): min=95, max=520, avg=297.72, stdev=58.62 00:28:24.809 clat percentiles (msec): 00:28:24.809 | 1.00th=[ 142], 5.00th=[ 197], 10.00th=[ 213], 20.00th=[ 271], 00:28:24.809 | 30.00th=[ 288], 40.00th=[ 292], 50.00th=[ 300], 60.00th=[ 300], 00:28:24.809 | 70.00th=[ 313], 80.00th=[ 342], 90.00th=[ 351], 95.00th=[ 414], 00:28:24.809 | 99.00th=[ 472], 99.50th=[ 518], 99.90th=[ 518], 99.95th=[ 518], 00:28:24.809 | 99.99th=[ 518] 00:28:24.809 bw ( KiB/s): min= 128, max= 384, per=3.95%, avg=211.20, stdev=72.60, samples=20 00:28:24.809 iops : min= 32, max= 96, avg=52.80, stdev=18.15, samples=20 00:28:24.809 lat (msec) : 100=0.37%, 250=17.28%, 500=81.62%, 750=0.74% 00:28:24.809 cpu : usr=98.48%, sys=1.04%, ctx=41, majf=0, minf=51 00:28:24.809 IO depths : 1=3.1%, 2=9.4%, 4=25.0%, 8=53.1%, 16=9.4%, 32=0.0%, >=64=0.0% 00:28:24.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.809 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.809 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.809 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.809 filename2: (groupid=0, jobs=1): err= 0: pid=1249860: Sat Jul 13 06:23:29 2024 00:28:24.809 read: IOPS=58, BW=233KiB/s (239kB/s)(2368KiB/10144msec) 00:28:24.809 slat (nsec): min=8291, max=73193, avg=27992.55, stdev=10286.33 00:28:24.809 clat (msec): min=165, max=435, avg=273.92, stdev=40.70 00:28:24.809 lat (msec): min=165, max=435, avg=273.95, stdev=40.70 00:28:24.809 clat percentiles (msec): 00:28:24.809 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 211], 20.00th=[ 245], 00:28:24.809 | 30.00th=[ 259], 40.00th=[ 275], 50.00th=[ 284], 60.00th=[ 292], 00:28:24.809 | 70.00th=[ 300], 80.00th=[ 305], 90.00th=[ 309], 95.00th=[ 321], 00:28:24.809 | 99.00th=[ 347], 99.50th=[ 384], 99.90th=[ 435], 99.95th=[ 435], 00:28:24.809 | 99.99th=[ 435] 00:28:24.809 bw ( KiB/s): min= 128, max= 272, per=4.30%, avg=230.40, stdev=52.79, samples=20 00:28:24.809 iops : min= 32, max= 68, avg=57.60, stdev=13.20, samples=20 00:28:24.809 lat (msec) : 250=22.97%, 500=77.03% 00:28:24.809 cpu : usr=98.43%, sys=1.16%, ctx=22, majf=0, minf=47 00:28:24.809 IO depths : 1=5.1%, 2=11.3%, 4=25.0%, 8=51.2%, 16=7.4%, 32=0.0%, >=64=0.0% 00:28:24.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:24.809 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:28:24.809 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:24.809 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:24.809 00:28:24.809 Run status group 0 (all jobs): 00:28:24.809 READ: bw=5347KiB/s (5475kB/s), 208KiB/s-245KiB/s (213kB/s-251kB/s), io=53.1MiB (55.7MB), run=10091-10171msec 00:28:24.809 06:23:29 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:24.809 06:23:29 -- target/dif.sh@43 -- # local sub 00:28:24.809 06:23:29 -- target/dif.sh@45 -- # for sub in "$@" 00:28:24.809 06:23:29 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:24.810 06:23:29 -- target/dif.sh@36 -- # local sub_id=0 00:28:24.810 06:23:29 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:24.810 06:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.810 06:23:29 -- common/autotest_common.sh@10 -- # set +x 00:28:24.810 06:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.810 06:23:29 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:24.810 06:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.810 06:23:29 -- common/autotest_common.sh@10 -- # set +x 00:28:24.810 06:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.810 06:23:29 -- target/dif.sh@45 -- # for sub in "$@" 00:28:24.810 06:23:29 -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:24.810 06:23:29 -- target/dif.sh@36 -- # local sub_id=1 00:28:24.810 06:23:29 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:24.810 06:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.810 06:23:29 -- common/autotest_common.sh@10 -- # set +x 00:28:24.810 06:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.810 06:23:29 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:24.810 06:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.810 06:23:29 -- common/autotest_common.sh@10 -- # set +x 00:28:24.810 06:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.810 06:23:29 -- target/dif.sh@45 -- # for sub in "$@" 00:28:24.810 06:23:29 -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:24.810 06:23:29 -- target/dif.sh@36 -- # local sub_id=2 00:28:24.810 06:23:29 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:24.810 06:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.810 06:23:29 -- common/autotest_common.sh@10 -- # set +x 00:28:24.810 06:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.810 06:23:29 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:24.810 06:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.810 06:23:29 -- common/autotest_common.sh@10 -- # set +x 00:28:24.810 06:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.810 06:23:29 -- target/dif.sh@115 -- # NULL_DIF=1 00:28:24.810 06:23:29 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:24.810 06:23:29 -- target/dif.sh@115 -- # numjobs=2 00:28:24.810 06:23:29 -- target/dif.sh@115 -- # iodepth=8 00:28:24.810 06:23:29 -- target/dif.sh@115 -- # runtime=5 00:28:24.810 06:23:29 -- target/dif.sh@115 -- # files=1 00:28:24.810 06:23:29 -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:24.810 06:23:29 -- target/dif.sh@28 -- # local sub 00:28:24.810 06:23:29 -- target/dif.sh@30 -- # for sub in "$@" 00:28:24.810 06:23:29 -- target/dif.sh@31 -- # create_subsystem 0 00:28:24.810 06:23:29 -- target/dif.sh@18 -- # local sub_id=0 
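The group READ line above closes the ~10-second randread pass; the trace then deletes all three subsystems (nvmf_delete_subsystem plus bdev_null_delete for each) and, with NULL_DIF=1, bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5 and files=1 set for the next pass, begins rebuilding subsystems 0 and 1. The rebuild is traced entry by entry in the lines that follow; rpc_cmd there is effectively the harness wrapper around SPDK's rpc.py, so the same four steps can be issued by hand against a running target. A minimal sketch, assuming rpc.py lives at scripts/rpc.py and reusing the values from the trace (repeat with bdev_null1 / cnode1 / serial 53313233-1 for subsystem 1):
# sketch only -- equivalent standalone RPC calls for subsystem 0
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420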
00:28:24.810 06:23:29 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:24.810 06:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.810 06:23:29 -- common/autotest_common.sh@10 -- # set +x 00:28:24.810 bdev_null0 00:28:24.810 06:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.810 06:23:29 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:24.810 06:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.810 06:23:29 -- common/autotest_common.sh@10 -- # set +x 00:28:24.810 06:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.810 06:23:29 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:24.810 06:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.810 06:23:29 -- common/autotest_common.sh@10 -- # set +x 00:28:24.810 06:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.810 06:23:29 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:24.810 06:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.810 06:23:29 -- common/autotest_common.sh@10 -- # set +x 00:28:24.810 [2024-07-13 06:23:29.520735] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:24.810 06:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.810 06:23:29 -- target/dif.sh@30 -- # for sub in "$@" 00:28:24.810 06:23:29 -- target/dif.sh@31 -- # create_subsystem 1 00:28:24.810 06:23:29 -- target/dif.sh@18 -- # local sub_id=1 00:28:24.810 06:23:29 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:24.810 06:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.810 06:23:29 -- common/autotest_common.sh@10 -- # set +x 00:28:24.810 bdev_null1 00:28:24.810 06:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.810 06:23:29 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:24.810 06:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.810 06:23:29 -- common/autotest_common.sh@10 -- # set +x 00:28:24.810 06:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.810 06:23:29 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:24.810 06:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.810 06:23:29 -- common/autotest_common.sh@10 -- # set +x 00:28:24.810 06:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.810 06:23:29 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:24.810 06:23:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.810 06:23:29 -- common/autotest_common.sh@10 -- # set +x 00:28:24.810 06:23:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.810 06:23:29 -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:24.810 06:23:29 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:24.810 06:23:29 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:24.810 06:23:29 -- nvmf/common.sh@520 -- # config=() 00:28:24.810 06:23:29 -- nvmf/common.sh@520 -- # local subsystem config 00:28:24.810 06:23:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:24.810 
06:23:29 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:24.810 06:23:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:24.810 { 00:28:24.810 "params": { 00:28:24.810 "name": "Nvme$subsystem", 00:28:24.810 "trtype": "$TEST_TRANSPORT", 00:28:24.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:24.810 "adrfam": "ipv4", 00:28:24.810 "trsvcid": "$NVMF_PORT", 00:28:24.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:24.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:24.810 "hdgst": ${hdgst:-false}, 00:28:24.810 "ddgst": ${ddgst:-false} 00:28:24.810 }, 00:28:24.810 "method": "bdev_nvme_attach_controller" 00:28:24.810 } 00:28:24.810 EOF 00:28:24.810 )") 00:28:24.810 06:23:29 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:24.810 06:23:29 -- target/dif.sh@82 -- # gen_fio_conf 00:28:24.810 06:23:29 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:24.810 06:23:29 -- target/dif.sh@54 -- # local file 00:28:24.810 06:23:29 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:24.810 06:23:29 -- target/dif.sh@56 -- # cat 00:28:24.810 06:23:29 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:24.810 06:23:29 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:24.810 06:23:29 -- common/autotest_common.sh@1320 -- # shift 00:28:24.810 06:23:29 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:24.810 06:23:29 -- nvmf/common.sh@542 -- # cat 00:28:24.810 06:23:29 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:24.810 06:23:29 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:24.810 06:23:29 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:24.810 06:23:29 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:24.810 06:23:29 -- target/dif.sh@72 -- # (( file <= files )) 00:28:24.810 06:23:29 -- target/dif.sh@73 -- # cat 00:28:24.810 06:23:29 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:24.810 06:23:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:24.810 06:23:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:24.810 { 00:28:24.810 "params": { 00:28:24.810 "name": "Nvme$subsystem", 00:28:24.810 "trtype": "$TEST_TRANSPORT", 00:28:24.810 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:24.810 "adrfam": "ipv4", 00:28:24.810 "trsvcid": "$NVMF_PORT", 00:28:24.810 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:24.810 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:24.810 "hdgst": ${hdgst:-false}, 00:28:24.810 "ddgst": ${ddgst:-false} 00:28:24.810 }, 00:28:24.810 "method": "bdev_nvme_attach_controller" 00:28:24.810 } 00:28:24.810 EOF 00:28:24.810 )") 00:28:24.810 06:23:29 -- nvmf/common.sh@542 -- # cat 00:28:24.810 06:23:29 -- target/dif.sh@72 -- # (( file++ )) 00:28:24.810 06:23:29 -- target/dif.sh@72 -- # (( file <= files )) 00:28:24.810 06:23:29 -- nvmf/common.sh@544 -- # jq . 
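The loop traced above assembles one "params" block per controller to attach and pipes the assembled blocks through jq into the JSON document printed next; fio receives that document on /dev/fd/62 via --spdk_json_conf and the generated job file on /dev/fd/61, with the spdk_bdev ioengine preloaded from the build tree. Reproduced outside the harness, the invocation looks roughly like this (a sketch: bdev.json and dif.fio are stand-in names for the two /dev/fd streams):
# sketch -- run fio through SPDK's bdev ioengine plugin against a saved copy of the generated config
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio
In the output that follows, the job names filename0 and filename1 come from the two generated job sections, one per attached controller; with numjobs=2 that gives the four threads fio reports.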
00:28:24.810 06:23:29 -- nvmf/common.sh@545 -- # IFS=, 00:28:24.810 06:23:29 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:24.810 "params": { 00:28:24.810 "name": "Nvme0", 00:28:24.810 "trtype": "tcp", 00:28:24.810 "traddr": "10.0.0.2", 00:28:24.810 "adrfam": "ipv4", 00:28:24.810 "trsvcid": "4420", 00:28:24.810 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:24.810 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:24.810 "hdgst": false, 00:28:24.810 "ddgst": false 00:28:24.810 }, 00:28:24.810 "method": "bdev_nvme_attach_controller" 00:28:24.810 },{ 00:28:24.810 "params": { 00:28:24.810 "name": "Nvme1", 00:28:24.810 "trtype": "tcp", 00:28:24.810 "traddr": "10.0.0.2", 00:28:24.810 "adrfam": "ipv4", 00:28:24.810 "trsvcid": "4420", 00:28:24.810 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:24.810 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:24.810 "hdgst": false, 00:28:24.810 "ddgst": false 00:28:24.810 }, 00:28:24.810 "method": "bdev_nvme_attach_controller" 00:28:24.810 }' 00:28:24.810 06:23:29 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:24.810 06:23:29 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:24.810 06:23:29 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:24.810 06:23:29 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:24.810 06:23:29 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:24.810 06:23:29 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:24.810 06:23:29 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:24.810 06:23:29 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:24.810 06:23:29 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:24.811 06:23:29 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:24.811 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:24.811 ... 00:28:24.811 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:24.811 ... 00:28:24.811 fio-3.35 00:28:24.811 Starting 4 threads 00:28:24.811 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.811 [2024-07-13 06:23:30.359653] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:28:24.811 [2024-07-13 06:23:30.359736] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:28:29.023 00:28:29.023 filename0: (groupid=0, jobs=1): err= 0: pid=1251287: Sat Jul 13 06:23:35 2024 00:28:29.023 read: IOPS=1819, BW=14.2MiB/s (14.9MB/s)(71.1MiB/5001msec) 00:28:29.023 slat (nsec): min=6907, max=59285, avg=14133.65, stdev=7768.93 00:28:29.023 clat (usec): min=873, max=47987, avg=4352.81, stdev=1515.16 00:28:29.023 lat (usec): min=886, max=48014, avg=4366.95, stdev=1515.31 00:28:29.023 clat percentiles (usec): 00:28:29.023 | 1.00th=[ 2638], 5.00th=[ 3261], 10.00th=[ 3523], 20.00th=[ 3752], 00:28:29.023 | 30.00th=[ 3916], 40.00th=[ 4015], 50.00th=[ 4113], 60.00th=[ 4293], 00:28:29.023 | 70.00th=[ 4686], 80.00th=[ 5014], 90.00th=[ 5276], 95.00th=[ 5604], 00:28:29.023 | 99.00th=[ 6915], 99.50th=[ 7308], 99.90th=[ 8586], 99.95th=[47973], 00:28:29.023 | 99.99th=[47973] 00:28:29.023 bw ( KiB/s): min=12496, max=16400, per=24.54%, avg=14406.44, stdev=1519.98, samples=9 00:28:29.023 iops : min= 1562, max= 2050, avg=1800.78, stdev=190.02, samples=9 00:28:29.023 lat (usec) : 1000=0.02% 00:28:29.023 lat (msec) : 2=0.25%, 4=38.89%, 10=60.75%, 50=0.09% 00:28:29.023 cpu : usr=94.36%, sys=5.10%, ctx=10, majf=0, minf=9 00:28:29.023 IO depths : 1=0.1%, 2=6.9%, 4=64.9%, 8=28.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:29.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.023 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.023 issued rwts: total=9097,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:29.023 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:29.023 filename0: (groupid=0, jobs=1): err= 0: pid=1251288: Sat Jul 13 06:23:35 2024 00:28:29.023 read: IOPS=1824, BW=14.2MiB/s (14.9MB/s)(71.3MiB/5002msec) 00:28:29.023 slat (nsec): min=6958, max=87858, avg=13965.20, stdev=7851.97 00:28:29.023 clat (usec): min=788, max=9508, avg=4340.53, stdev=900.23 00:28:29.023 lat (usec): min=804, max=9526, avg=4354.50, stdev=900.11 00:28:29.023 clat percentiles (usec): 00:28:29.023 | 1.00th=[ 2671], 5.00th=[ 3261], 10.00th=[ 3490], 20.00th=[ 3720], 00:28:29.023 | 30.00th=[ 3884], 40.00th=[ 3982], 50.00th=[ 4080], 60.00th=[ 4228], 00:28:29.023 | 70.00th=[ 4686], 80.00th=[ 5014], 90.00th=[ 5407], 95.00th=[ 5866], 00:28:29.023 | 99.00th=[ 7570], 99.50th=[ 7963], 99.90th=[ 8586], 99.95th=[ 9241], 00:28:29.023 | 99.99th=[ 9503] 00:28:29.023 bw ( KiB/s): min=12160, max=15984, per=24.54%, avg=14405.33, stdev=1598.50, samples=9 00:28:29.023 iops : min= 1520, max= 1998, avg=1800.67, stdev=199.81, samples=9 00:28:29.023 lat (usec) : 1000=0.03% 00:28:29.023 lat (msec) : 2=0.49%, 4=41.55%, 10=57.92% 00:28:29.023 cpu : usr=95.18%, sys=4.32%, ctx=12, majf=0, minf=9 00:28:29.023 IO depths : 1=0.1%, 2=7.0%, 4=65.0%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:29.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.023 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.023 issued rwts: total=9124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:29.023 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:29.023 filename1: (groupid=0, jobs=1): err= 0: pid=1251289: Sat Jul 13 06:23:35 2024 00:28:29.023 read: IOPS=1821, BW=14.2MiB/s (14.9MB/s)(71.2MiB/5003msec) 00:28:29.023 slat (usec): min=4, max=232, avg=13.63, stdev= 8.27 00:28:29.023 clat (usec): min=789, max=9677, avg=4347.42, stdev=871.00 00:28:29.023 lat (usec): min=806, max=9690, avg=4361.05, 
stdev=871.35 00:28:29.023 clat percentiles (usec): 00:28:29.023 | 1.00th=[ 2606], 5.00th=[ 3326], 10.00th=[ 3523], 20.00th=[ 3752], 00:28:29.023 | 30.00th=[ 3916], 40.00th=[ 4015], 50.00th=[ 4113], 60.00th=[ 4293], 00:28:29.023 | 70.00th=[ 4686], 80.00th=[ 5014], 90.00th=[ 5342], 95.00th=[ 5800], 00:28:29.023 | 99.00th=[ 7504], 99.50th=[ 7963], 99.90th=[ 9241], 99.95th=[ 9634], 00:28:29.023 | 99.99th=[ 9634] 00:28:29.023 bw ( KiB/s): min=12256, max=16160, per=24.82%, avg=14569.60, stdev=1601.03, samples=10 00:28:29.023 iops : min= 1532, max= 2020, avg=1821.20, stdev=200.13, samples=10 00:28:29.023 lat (usec) : 1000=0.04% 00:28:29.023 lat (msec) : 2=0.47%, 4=39.30%, 10=60.18% 00:28:29.023 cpu : usr=93.28%, sys=5.28%, ctx=18, majf=0, minf=9 00:28:29.023 IO depths : 1=0.2%, 2=7.8%, 4=64.2%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:29.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.023 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.023 issued rwts: total=9111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:29.023 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:29.023 filename1: (groupid=0, jobs=1): err= 0: pid=1251290: Sat Jul 13 06:23:35 2024 00:28:29.023 read: IOPS=1876, BW=14.7MiB/s (15.4MB/s)(73.3MiB/5004msec) 00:28:29.023 slat (nsec): min=4190, max=59868, avg=16584.49, stdev=7575.87 00:28:29.023 clat (usec): min=964, max=11162, avg=4209.76, stdev=804.02 00:28:29.023 lat (usec): min=973, max=11177, avg=4226.34, stdev=804.58 00:28:29.023 clat percentiles (usec): 00:28:29.023 | 1.00th=[ 2507], 5.00th=[ 3163], 10.00th=[ 3392], 20.00th=[ 3654], 00:28:29.023 | 30.00th=[ 3818], 40.00th=[ 3949], 50.00th=[ 4047], 60.00th=[ 4178], 00:28:29.023 | 70.00th=[ 4424], 80.00th=[ 4883], 90.00th=[ 5211], 95.00th=[ 5473], 00:28:29.023 | 99.00th=[ 6587], 99.50th=[ 7308], 99.90th=[ 9503], 99.95th=[11076], 00:28:29.023 | 99.99th=[11207] 00:28:29.023 bw ( KiB/s): min=12752, max=16400, per=25.57%, avg=15008.00, stdev=1515.62, samples=10 00:28:29.023 iops : min= 1594, max= 2050, avg=1876.00, stdev=189.45, samples=10 00:28:29.023 lat (usec) : 1000=0.02% 00:28:29.023 lat (msec) : 2=0.31%, 4=45.82%, 10=53.76%, 20=0.09% 00:28:29.023 cpu : usr=94.26%, sys=4.60%, ctx=190, majf=0, minf=0 00:28:29.023 IO depths : 1=0.2%, 2=7.2%, 4=65.3%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:29.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.023 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:29.023 issued rwts: total=9388,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:29.023 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:29.023 00:28:29.023 Run status group 0 (all jobs): 00:28:29.023 READ: bw=57.3MiB/s (60.1MB/s), 14.2MiB/s-14.7MiB/s (14.9MB/s-15.4MB/s), io=287MiB (301MB), run=5001-5004msec 00:28:29.282 06:23:35 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:29.282 06:23:35 -- target/dif.sh@43 -- # local sub 00:28:29.282 06:23:35 -- target/dif.sh@45 -- # for sub in "$@" 00:28:29.282 06:23:35 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:29.282 06:23:35 -- target/dif.sh@36 -- # local sub_id=0 00:28:29.282 06:23:35 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:29.282 06:23:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.282 06:23:35 -- common/autotest_common.sh@10 -- # set +x 00:28:29.282 06:23:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.282 06:23:35 -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null0 00:28:29.282 06:23:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.282 06:23:35 -- common/autotest_common.sh@10 -- # set +x 00:28:29.282 06:23:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.282 06:23:35 -- target/dif.sh@45 -- # for sub in "$@" 00:28:29.282 06:23:35 -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:29.282 06:23:35 -- target/dif.sh@36 -- # local sub_id=1 00:28:29.282 06:23:35 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:29.282 06:23:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.282 06:23:35 -- common/autotest_common.sh@10 -- # set +x 00:28:29.282 06:23:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.282 06:23:35 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:29.282 06:23:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.282 06:23:35 -- common/autotest_common.sh@10 -- # set +x 00:28:29.282 06:23:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.282 00:28:29.282 real 0m24.142s 00:28:29.282 user 4m36.808s 00:28:29.282 sys 0m5.433s 00:28:29.282 06:23:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:29.282 06:23:35 -- common/autotest_common.sh@10 -- # set +x 00:28:29.282 ************************************ 00:28:29.282 END TEST fio_dif_rand_params 00:28:29.282 ************************************ 00:28:29.282 06:23:35 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:29.282 06:23:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:29.282 06:23:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:29.282 06:23:35 -- common/autotest_common.sh@10 -- # set +x 00:28:29.282 ************************************ 00:28:29.282 START TEST fio_dif_digest 00:28:29.282 ************************************ 00:28:29.282 06:23:35 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:28:29.282 06:23:35 -- target/dif.sh@123 -- # local NULL_DIF 00:28:29.282 06:23:35 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:29.282 06:23:35 -- target/dif.sh@125 -- # local hdgst ddgst 00:28:29.282 06:23:35 -- target/dif.sh@127 -- # NULL_DIF=3 00:28:29.282 06:23:35 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:29.282 06:23:35 -- target/dif.sh@127 -- # numjobs=3 00:28:29.282 06:23:35 -- target/dif.sh@127 -- # iodepth=3 00:28:29.282 06:23:35 -- target/dif.sh@127 -- # runtime=10 00:28:29.282 06:23:35 -- target/dif.sh@128 -- # hdgst=true 00:28:29.282 06:23:35 -- target/dif.sh@128 -- # ddgst=true 00:28:29.282 06:23:35 -- target/dif.sh@130 -- # create_subsystems 0 00:28:29.282 06:23:35 -- target/dif.sh@28 -- # local sub 00:28:29.282 06:23:35 -- target/dif.sh@30 -- # for sub in "$@" 00:28:29.282 06:23:35 -- target/dif.sh@31 -- # create_subsystem 0 00:28:29.282 06:23:35 -- target/dif.sh@18 -- # local sub_id=0 00:28:29.282 06:23:35 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:29.282 06:23:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.282 06:23:35 -- common/autotest_common.sh@10 -- # set +x 00:28:29.282 bdev_null0 00:28:29.282 06:23:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.282 06:23:35 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:29.282 06:23:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.282 06:23:35 -- common/autotest_common.sh@10 -- # set +x 00:28:29.540 
06:23:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.540 06:23:35 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:29.540 06:23:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.540 06:23:35 -- common/autotest_common.sh@10 -- # set +x 00:28:29.540 06:23:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.540 06:23:35 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:29.540 06:23:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:29.540 06:23:35 -- common/autotest_common.sh@10 -- # set +x 00:28:29.540 [2024-07-13 06:23:35.805494] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:29.540 06:23:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:29.540 06:23:35 -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:29.540 06:23:35 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:29.540 06:23:35 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:29.540 06:23:35 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:29.540 06:23:35 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:29.540 06:23:35 -- nvmf/common.sh@520 -- # config=() 00:28:29.540 06:23:35 -- target/dif.sh@82 -- # gen_fio_conf 00:28:29.540 06:23:35 -- nvmf/common.sh@520 -- # local subsystem config 00:28:29.540 06:23:35 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:29.540 06:23:35 -- target/dif.sh@54 -- # local file 00:28:29.540 06:23:35 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:29.540 06:23:35 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:29.540 06:23:35 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:29.540 06:23:35 -- target/dif.sh@56 -- # cat 00:28:29.540 06:23:35 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:29.540 { 00:28:29.540 "params": { 00:28:29.540 "name": "Nvme$subsystem", 00:28:29.540 "trtype": "$TEST_TRANSPORT", 00:28:29.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:29.540 "adrfam": "ipv4", 00:28:29.540 "trsvcid": "$NVMF_PORT", 00:28:29.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:29.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:29.540 "hdgst": ${hdgst:-false}, 00:28:29.540 "ddgst": ${ddgst:-false} 00:28:29.540 }, 00:28:29.540 "method": "bdev_nvme_attach_controller" 00:28:29.540 } 00:28:29.540 EOF 00:28:29.540 )") 00:28:29.540 06:23:35 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:29.540 06:23:35 -- common/autotest_common.sh@1320 -- # shift 00:28:29.540 06:23:35 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:29.540 06:23:35 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:29.540 06:23:35 -- nvmf/common.sh@542 -- # cat 00:28:29.540 06:23:35 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:29.540 06:23:35 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:29.540 06:23:35 -- target/dif.sh@72 -- # (( file <= files )) 00:28:29.540 06:23:35 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:29.540 06:23:35 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:29.540 06:23:35 -- 
nvmf/common.sh@544 -- # jq . 00:28:29.540 06:23:35 -- nvmf/common.sh@545 -- # IFS=, 00:28:29.540 06:23:35 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:29.540 "params": { 00:28:29.540 "name": "Nvme0", 00:28:29.540 "trtype": "tcp", 00:28:29.540 "traddr": "10.0.0.2", 00:28:29.540 "adrfam": "ipv4", 00:28:29.540 "trsvcid": "4420", 00:28:29.540 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:29.540 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:29.540 "hdgst": true, 00:28:29.540 "ddgst": true 00:28:29.540 }, 00:28:29.540 "method": "bdev_nvme_attach_controller" 00:28:29.540 }' 00:28:29.540 06:23:35 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:29.540 06:23:35 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:29.540 06:23:35 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:29.540 06:23:35 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:29.540 06:23:35 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:29.540 06:23:35 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:29.540 06:23:35 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:29.540 06:23:35 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:29.540 06:23:35 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:29.540 06:23:35 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:29.798 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:29.798 ... 00:28:29.798 fio-3.35 00:28:29.798 Starting 3 threads 00:28:29.798 EAL: No free 2048 kB hugepages reported on node 1 00:28:30.056 [2024-07-13 06:23:36.463402] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
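Relative to the earlier attach (where both were false), the bdev_nvme_attach_controller parameters printed above set "hdgst": true and "ddgst": true, so this pass exercises the NVMe/TCP header and data digests (CRC32C over each PDU header and payload) against the DIF-type-3 null bdev created a few entries back. The spdk.sock notice here, and its companion on the next line, comes from fio's SPDK plugin trying to bring up its own RPC service on the default socket the running nvmf target already holds; the job carries on regardless. For comparison only (this test stays in userspace through bdev_nvme), the same digest-enabled connection from a kernel initiator would look roughly like the following nvme-cli call:
# sketch, not part of this run: nvme-cli connect with header/data digest enabled
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode0 --hdr-digest --data-digest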
00:28:30.056 [2024-07-13 06:23:36.463479] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:28:42.250 00:28:42.250 filename0: (groupid=0, jobs=1): err= 0: pid=1252192: Sat Jul 13 06:23:46 2024 00:28:42.250 read: IOPS=207, BW=26.0MiB/s (27.2MB/s)(261MiB/10048msec) 00:28:42.250 slat (nsec): min=6108, max=41632, avg=14173.18, stdev=3888.25 00:28:42.250 clat (usec): min=8511, max=55840, avg=14405.95, stdev=2357.67 00:28:42.250 lat (usec): min=8523, max=55853, avg=14420.12, stdev=2357.46 00:28:42.250 clat percentiles (usec): 00:28:42.250 | 1.00th=[10290], 5.00th=[12256], 10.00th=[12780], 20.00th=[13435], 00:28:42.250 | 30.00th=[13698], 40.00th=[14091], 50.00th=[14353], 60.00th=[14615], 00:28:42.250 | 70.00th=[15008], 80.00th=[15401], 90.00th=[15926], 95.00th=[16319], 00:28:42.250 | 99.00th=[17433], 99.50th=[17695], 99.90th=[54789], 99.95th=[55837], 00:28:42.250 | 99.99th=[55837] 00:28:42.250 bw ( KiB/s): min=23552, max=28416, per=33.90%, avg=26675.20, stdev=1247.52, samples=20 00:28:42.250 iops : min= 184, max= 222, avg=208.40, stdev= 9.75, samples=20 00:28:42.250 lat (msec) : 10=0.81%, 20=98.95%, 100=0.24% 00:28:42.250 cpu : usr=91.73%, sys=7.79%, ctx=16, majf=0, minf=99 00:28:42.250 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:42.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:42.250 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:42.250 issued rwts: total=2087,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:42.250 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:42.250 filename0: (groupid=0, jobs=1): err= 0: pid=1252193: Sat Jul 13 06:23:46 2024 00:28:42.250 read: IOPS=203, BW=25.4MiB/s (26.6MB/s)(255MiB/10049msec) 00:28:42.250 slat (nsec): min=6163, max=73517, avg=13882.39, stdev=4042.48 00:28:42.250 clat (usec): min=8512, max=55737, avg=14739.28, stdev=2804.74 00:28:42.250 lat (usec): min=8524, max=55750, avg=14753.16, stdev=2804.73 00:28:42.250 clat percentiles (usec): 00:28:42.250 | 1.00th=[10945], 5.00th=[12518], 10.00th=[13042], 20.00th=[13566], 00:28:42.250 | 30.00th=[13960], 40.00th=[14353], 50.00th=[14615], 60.00th=[14877], 00:28:42.250 | 70.00th=[15270], 80.00th=[15664], 90.00th=[16188], 95.00th=[16712], 00:28:42.250 | 99.00th=[17695], 99.50th=[20841], 99.90th=[55313], 99.95th=[55837], 00:28:42.250 | 99.99th=[55837] 00:28:42.250 bw ( KiB/s): min=22784, max=28160, per=33.13%, avg=26073.60, stdev=1345.11, samples=20 00:28:42.250 iops : min= 178, max= 220, avg=203.70, stdev=10.51, samples=20 00:28:42.250 lat (msec) : 10=0.25%, 20=99.22%, 50=0.15%, 100=0.39% 00:28:42.250 cpu : usr=91.76%, sys=7.77%, ctx=22, majf=0, minf=185 00:28:42.250 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:42.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:42.250 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:42.250 issued rwts: total=2040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:42.250 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:42.250 filename0: (groupid=0, jobs=1): err= 0: pid=1252194: Sat Jul 13 06:23:46 2024 00:28:42.250 read: IOPS=204, BW=25.5MiB/s (26.8MB/s)(256MiB/10049msec) 00:28:42.250 slat (nsec): min=6117, max=41427, avg=15423.86, stdev=4883.22 00:28:42.250 clat (usec): min=8340, max=55388, avg=14656.28, stdev=2290.90 00:28:42.250 lat (usec): min=8352, max=55407, avg=14671.70, stdev=2290.56 00:28:42.250 clat percentiles (usec): 
00:28:42.250 | 1.00th=[10814], 5.00th=[12518], 10.00th=[13042], 20.00th=[13566], 00:28:42.250 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14615], 60.00th=[14877], 00:28:42.250 | 70.00th=[15270], 80.00th=[15664], 90.00th=[16188], 95.00th=[16581], 00:28:42.250 | 99.00th=[17695], 99.50th=[18220], 99.90th=[53740], 99.95th=[54789], 00:28:42.250 | 99.99th=[55313] 00:28:42.250 bw ( KiB/s): min=24832, max=27904, per=33.33%, avg=26227.20, stdev=948.73, samples=20 00:28:42.250 iops : min= 194, max= 218, avg=204.90, stdev= 7.41, samples=20 00:28:42.250 lat (msec) : 10=0.49%, 20=99.22%, 50=0.10%, 100=0.20% 00:28:42.250 cpu : usr=91.78%, sys=7.73%, ctx=25, majf=0, minf=181 00:28:42.250 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:42.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:42.250 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:42.250 issued rwts: total=2051,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:42.250 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:42.250 00:28:42.250 Run status group 0 (all jobs): 00:28:42.250 READ: bw=76.8MiB/s (80.6MB/s), 25.4MiB/s-26.0MiB/s (26.6MB/s-27.2MB/s), io=772MiB (810MB), run=10048-10049msec 00:28:42.250 06:23:46 -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:42.250 06:23:46 -- target/dif.sh@43 -- # local sub 00:28:42.250 06:23:46 -- target/dif.sh@45 -- # for sub in "$@" 00:28:42.250 06:23:46 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:42.250 06:23:46 -- target/dif.sh@36 -- # local sub_id=0 00:28:42.250 06:23:46 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:42.250 06:23:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:42.250 06:23:46 -- common/autotest_common.sh@10 -- # set +x 00:28:42.250 06:23:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:42.250 06:23:46 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:42.250 06:23:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:42.250 06:23:46 -- common/autotest_common.sh@10 -- # set +x 00:28:42.250 06:23:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:42.250 00:28:42.250 real 0m11.205s 00:28:42.250 user 0m28.798s 00:28:42.250 sys 0m2.603s 00:28:42.250 06:23:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:42.250 06:23:46 -- common/autotest_common.sh@10 -- # set +x 00:28:42.250 ************************************ 00:28:42.250 END TEST fio_dif_digest 00:28:42.250 ************************************ 00:28:42.250 06:23:47 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:42.250 06:23:47 -- target/dif.sh@147 -- # nvmftestfini 00:28:42.251 06:23:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:42.251 06:23:47 -- nvmf/common.sh@116 -- # sync 00:28:42.251 06:23:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:42.251 06:23:47 -- nvmf/common.sh@119 -- # set +e 00:28:42.251 06:23:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:42.251 06:23:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:42.251 rmmod nvme_tcp 00:28:42.251 rmmod nvme_fabrics 00:28:42.251 rmmod nvme_keyring 00:28:42.251 06:23:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:42.251 06:23:47 -- nvmf/common.sh@123 -- # set -e 00:28:42.251 06:23:47 -- nvmf/common.sh@124 -- # return 0 00:28:42.251 06:23:47 -- nvmf/common.sh@477 -- # '[' -n 1245826 ']' 00:28:42.251 06:23:47 -- nvmf/common.sh@478 -- # killprocess 1245826 00:28:42.251 06:23:47 -- common/autotest_common.sh@926 -- # 
'[' -z 1245826 ']' 00:28:42.251 06:23:47 -- common/autotest_common.sh@930 -- # kill -0 1245826 00:28:42.251 06:23:47 -- common/autotest_common.sh@931 -- # uname 00:28:42.251 06:23:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:42.251 06:23:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1245826 00:28:42.251 06:23:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:42.251 06:23:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:42.251 06:23:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1245826' 00:28:42.251 killing process with pid 1245826 00:28:42.251 06:23:47 -- common/autotest_common.sh@945 -- # kill 1245826 00:28:42.251 06:23:47 -- common/autotest_common.sh@950 -- # wait 1245826 00:28:42.251 06:23:47 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:28:42.251 06:23:47 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:42.251 Waiting for block devices as requested 00:28:42.251 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:28:42.251 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:42.509 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:42.509 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:42.509 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:42.509 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:42.768 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:42.768 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:42.768 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:42.768 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:43.026 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:43.026 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:43.026 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:43.026 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:43.284 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:43.284 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:43.284 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:43.542 06:23:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:43.542 06:23:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:43.542 06:23:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:43.542 06:23:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:43.543 06:23:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.543 06:23:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:43.543 06:23:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.444 06:23:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:45.444 00:28:45.444 real 1m7.050s 00:28:45.444 user 6m33.585s 00:28:45.444 sys 0m16.974s 00:28:45.444 06:23:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:45.444 06:23:51 -- common/autotest_common.sh@10 -- # set +x 00:28:45.444 ************************************ 00:28:45.444 END TEST nvmf_dif 00:28:45.444 ************************************ 00:28:45.444 06:23:51 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:45.444 06:23:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:45.444 06:23:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:45.444 06:23:51 -- common/autotest_common.sh@10 -- # set +x 00:28:45.444 ************************************ 00:28:45.444 START TEST nvmf_abort_qd_sizes 00:28:45.444 
************************************ 00:28:45.444 06:23:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:45.444 * Looking for test storage... 00:28:45.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:45.444 06:23:51 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:45.444 06:23:51 -- nvmf/common.sh@7 -- # uname -s 00:28:45.445 06:23:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:45.445 06:23:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:45.445 06:23:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:45.445 06:23:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:45.445 06:23:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:45.704 06:23:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:45.704 06:23:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:45.704 06:23:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:45.704 06:23:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:45.704 06:23:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:45.704 06:23:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:45.704 06:23:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:45.704 06:23:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:45.704 06:23:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:45.704 06:23:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:45.704 06:23:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:45.704 06:23:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:45.704 06:23:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:45.704 06:23:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:45.704 06:23:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.704 06:23:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.704 06:23:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.704 06:23:51 -- paths/export.sh@5 -- # export PATH 00:28:45.704 06:23:51 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:45.704 06:23:51 -- nvmf/common.sh@46 -- # : 0 00:28:45.704 06:23:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:45.704 06:23:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:45.704 06:23:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:45.704 06:23:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:45.704 06:23:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:45.704 06:23:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:45.704 06:23:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:45.704 06:23:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:45.704 06:23:51 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:28:45.704 06:23:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:45.704 06:23:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:45.704 06:23:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:45.704 06:23:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:45.704 06:23:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:45.704 06:23:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.704 06:23:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:45.704 06:23:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:45.704 06:23:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:45.704 06:23:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:45.704 06:23:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:45.704 06:23:51 -- common/autotest_common.sh@10 -- # set +x 00:28:47.611 06:23:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:47.611 06:23:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:47.611 06:23:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:47.611 06:23:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:47.611 06:23:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:47.611 06:23:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:47.611 06:23:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:47.611 06:23:53 -- nvmf/common.sh@294 -- # net_devs=() 00:28:47.611 06:23:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:47.611 06:23:53 -- nvmf/common.sh@295 -- # e810=() 00:28:47.611 06:23:53 -- nvmf/common.sh@295 -- # local -ga e810 00:28:47.611 06:23:53 -- nvmf/common.sh@296 -- # x722=() 00:28:47.611 06:23:53 -- nvmf/common.sh@296 -- # local -ga x722 00:28:47.611 06:23:53 -- nvmf/common.sh@297 -- # mlx=() 00:28:47.611 06:23:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:47.611 06:23:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:47.611 06:23:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:47.611 06:23:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:47.611 06:23:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:47.611 06:23:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:47.611 06:23:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:47.611 06:23:53 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:47.611 06:23:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:47.611 06:23:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:47.611 06:23:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:47.611 06:23:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:47.611 06:23:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:47.611 06:23:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:47.611 06:23:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:47.611 06:23:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:47.611 06:23:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:47.611 06:23:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:47.611 06:23:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:47.611 06:23:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:47.611 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:47.611 06:23:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:47.611 06:23:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:47.611 06:23:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:47.611 06:23:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:47.611 06:23:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:47.611 06:23:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:47.611 06:23:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:47.611 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:47.611 06:23:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:47.611 06:23:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:47.611 06:23:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:47.611 06:23:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:47.611 06:23:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:47.611 06:23:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:47.611 06:23:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:47.611 06:23:53 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:47.611 06:23:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:47.611 06:23:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.611 06:23:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:47.611 06:23:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.611 06:23:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:47.611 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:47.611 06:23:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.611 06:23:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:47.611 06:23:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.611 06:23:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:47.611 06:23:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.611 06:23:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:47.611 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:47.611 06:23:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.611 06:23:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:47.611 06:23:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:47.611 06:23:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:47.611 06:23:53 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:47.611 06:23:53 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:47.611 06:23:53 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:47.611 06:23:53 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:47.611 06:23:53 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:47.611 06:23:53 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:47.611 06:23:53 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:47.611 06:23:53 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:47.611 06:23:53 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:47.611 06:23:53 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:47.611 06:23:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:47.611 06:23:53 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:47.611 06:23:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:47.611 06:23:53 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:47.611 06:23:53 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:47.611 06:23:53 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:47.611 06:23:53 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:47.611 06:23:53 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:47.611 06:23:53 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:47.611 06:23:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:47.611 06:23:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:47.611 06:23:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:47.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:47.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:28:47.611 00:28:47.611 --- 10.0.0.2 ping statistics --- 00:28:47.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:47.611 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:28:47.612 06:23:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:47.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:47.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:28:47.612 00:28:47.612 --- 10.0.0.1 ping statistics --- 00:28:47.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:47.612 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:28:47.612 06:23:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:47.612 06:23:54 -- nvmf/common.sh@410 -- # return 0 00:28:47.612 06:23:54 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:28:47.612 06:23:54 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:48.982 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:48.982 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:48.982 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:48.982 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:48.982 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:48.982 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:48.982 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:48.982 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:48.982 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:48.982 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:48.982 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:48.982 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:48.982 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:48.982 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:48.982 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:48.982 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:49.919 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:28:49.919 06:23:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:49.919 06:23:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:49.919 06:23:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:49.919 06:23:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:49.919 06:23:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:49.919 06:23:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:49.919 06:23:56 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:28:49.919 06:23:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:49.920 06:23:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:49.920 06:23:56 -- common/autotest_common.sh@10 -- # set +x 00:28:49.920 06:23:56 -- nvmf/common.sh@469 -- # nvmfpid=1257089 00:28:49.920 06:23:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:49.920 06:23:56 -- nvmf/common.sh@470 -- # waitforlisten 1257089 00:28:49.920 06:23:56 -- common/autotest_common.sh@819 -- # '[' -z 1257089 ']' 00:28:49.920 06:23:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:49.920 06:23:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:49.920 06:23:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:49.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:49.920 06:23:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:49.920 06:23:56 -- common/autotest_common.sh@10 -- # set +x 00:28:50.177 [2024-07-13 06:23:56.438473] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
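The network plumbing that nvmf_tcp_init traces above condenses to the following sequence: one port of the E810 NIC (cvl_0_0) is moved into a private namespace and addressed as the target side, while its peer port (cvl_0_1) stays in the root namespace as the initiator side. This is a sketch assembled from the nvmf/common.sh trace, run as root; the interface names and 10.0.0.x addresses are the values this particular rig reports, not fixed constants.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address (inside the namespace)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP traffic on the initiator port
    ping -c 1 10.0.0.2                                                   # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator reachability

With both pings answering, nvmfappstart launches the target inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf), so nvmf_tgt listens on 10.0.0.2 while the initiator-side tools connect from the root namespace; its DPDK/EAL start-up output continues below.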
00:28:50.177 [2024-07-13 06:23:56.438565] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:50.177 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.177 [2024-07-13 06:23:56.507979] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:50.177 [2024-07-13 06:23:56.625203] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:50.177 [2024-07-13 06:23:56.625380] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:50.177 [2024-07-13 06:23:56.625401] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:50.177 [2024-07-13 06:23:56.625417] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:50.177 [2024-07-13 06:23:56.625533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.177 [2024-07-13 06:23:56.625611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:50.177 [2024-07-13 06:23:56.625638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:50.177 [2024-07-13 06:23:56.625641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.107 06:23:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:51.107 06:23:57 -- common/autotest_common.sh@852 -- # return 0 00:28:51.107 06:23:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:51.107 06:23:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:51.107 06:23:57 -- common/autotest_common.sh@10 -- # set +x 00:28:51.107 06:23:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:51.107 06:23:57 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:51.107 06:23:57 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:28:51.107 06:23:57 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:28:51.107 06:23:57 -- scripts/common.sh@311 -- # local bdf bdfs 00:28:51.107 06:23:57 -- scripts/common.sh@312 -- # local nvmes 00:28:51.107 06:23:57 -- scripts/common.sh@314 -- # [[ -n 0000:88:00.0 ]] 00:28:51.107 06:23:57 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:28:51.107 06:23:57 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:28:51.107 06:23:57 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:28:51.107 06:23:57 -- scripts/common.sh@322 -- # uname -s 00:28:51.107 06:23:57 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:28:51.107 06:23:57 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:28:51.107 06:23:57 -- scripts/common.sh@327 -- # (( 1 )) 00:28:51.107 06:23:57 -- scripts/common.sh@328 -- # printf '%s\n' 0000:88:00.0 00:28:51.107 06:23:57 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:28:51.107 06:23:57 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:88:00.0 00:28:51.107 06:23:57 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:28:51.107 06:23:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:51.107 06:23:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:51.107 06:23:57 -- common/autotest_common.sh@10 -- # set +x 00:28:51.107 ************************************ 00:28:51.107 START TEST 
spdk_target_abort 00:28:51.107 ************************************ 00:28:51.107 06:23:57 -- common/autotest_common.sh@1104 -- # spdk_target 00:28:51.107 06:23:57 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:51.107 06:23:57 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:28:51.107 06:23:57 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:28:51.107 06:23:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:51.107 06:23:57 -- common/autotest_common.sh@10 -- # set +x 00:28:54.384 spdk_targetn1 00:28:54.385 06:24:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:54.385 06:24:00 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:54.385 06:24:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:54.385 06:24:00 -- common/autotest_common.sh@10 -- # set +x 00:28:54.385 [2024-07-13 06:24:00.227971] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:54.385 06:24:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:54.385 06:24:00 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:28:54.385 06:24:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:54.385 06:24:00 -- common/autotest_common.sh@10 -- # set +x 00:28:54.385 06:24:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:54.385 06:24:00 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:28:54.385 06:24:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:54.385 06:24:00 -- common/autotest_common.sh@10 -- # set +x 00:28:54.385 06:24:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:54.385 06:24:00 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:28:54.385 06:24:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:54.385 06:24:00 -- common/autotest_common.sh@10 -- # set +x 00:28:54.385 [2024-07-13 06:24:00.260255] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:54.385 06:24:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:54.385 06:24:00 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:28:54.385 06:24:00 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:54.385 06:24:00 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:54.385 06:24:00 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:28:54.385 06:24:00 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:54.385 06:24:00 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:28:54.385 06:24:00 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:54.385 06:24:00 -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:54.385 06:24:00 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:54.385 06:24:00 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:54.385 06:24:00 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:54.385 06:24:00 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:54.385 06:24:00 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:54.385 06:24:00 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid 
subnqn 00:28:54.385 06:24:00 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:28:54.385 06:24:00 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:54.385 06:24:00 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:54.385 06:24:00 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:54.385 06:24:00 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:28:54.385 06:24:00 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:54.385 06:24:00 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:28:54.385 EAL: No free 2048 kB hugepages reported on node 1 00:28:56.905 Initializing NVMe Controllers 00:28:56.905 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:28:56.905 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:28:56.905 Initialization complete. Launching workers. 00:28:56.905 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 9506, failed: 0 00:28:56.905 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1273, failed to submit 8233 00:28:56.905 success 732, unsuccess 541, failed 0 00:28:56.905 06:24:03 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:56.905 06:24:03 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:28:57.162 EAL: No free 2048 kB hugepages reported on node 1 00:29:00.435 [2024-07-13 06:24:06.538924] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65a270 is same with the state(5) to be set 00:29:00.435 [2024-07-13 06:24:06.538983] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65a270 is same with the state(5) to be set 00:29:00.435 [2024-07-13 06:24:06.538999] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65a270 is same with the state(5) to be set 00:29:00.435 [2024-07-13 06:24:06.539011] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65a270 is same with the state(5) to be set 00:29:00.435 [2024-07-13 06:24:06.539023] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65a270 is same with the state(5) to be set 00:29:00.435 [2024-07-13 06:24:06.539035] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65a270 is same with the state(5) to be set 00:29:00.435 [2024-07-13 06:24:06.539056] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65a270 is same with the state(5) to be set 00:29:00.435 [2024-07-13 06:24:06.539069] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65a270 is same with the state(5) to be set 00:29:00.435 [2024-07-13 06:24:06.539081] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65a270 is same with the state(5) to be set 00:29:00.435 [2024-07-13 06:24:06.539093] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x65a270 is same with the state(5) to be set 00:29:00.435 [2024-07-13 06:24:06.539127] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65a270 is same with the state(5) to be set 00:29:00.435 [2024-07-13 06:24:06.539138] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x65a270 is same with the state(5) to be set 00:29:00.435 Initializing NVMe Controllers 00:29:00.435 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:29:00.435 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:29:00.435 Initialization complete. Launching workers. 00:29:00.435 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8607, failed: 0 00:29:00.435 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1267, failed to submit 7340 00:29:00.435 success 333, unsuccess 934, failed 0 00:29:00.435 06:24:06 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:00.435 06:24:06 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:29:00.435 EAL: No free 2048 kB hugepages reported on node 1 00:29:03.714 Initializing NVMe Controllers 00:29:03.714 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:29:03.714 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:29:03.714 Initialization complete. Launching workers. 00:29:03.714 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 32384, failed: 0 00:29:03.714 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2783, failed to submit 29601 00:29:03.714 success 527, unsuccess 2256, failed 0 00:29:03.714 06:24:09 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:29:03.714 06:24:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:03.714 06:24:09 -- common/autotest_common.sh@10 -- # set +x 00:29:03.714 06:24:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:03.714 06:24:09 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:29:03.714 06:24:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:03.714 06:24:09 -- common/autotest_common.sh@10 -- # set +x 00:29:05.086 06:24:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:05.086 06:24:11 -- target/abort_qd_sizes.sh@62 -- # killprocess 1257089 00:29:05.086 06:24:11 -- common/autotest_common.sh@926 -- # '[' -z 1257089 ']' 00:29:05.086 06:24:11 -- common/autotest_common.sh@930 -- # kill -0 1257089 00:29:05.086 06:24:11 -- common/autotest_common.sh@931 -- # uname 00:29:05.086 06:24:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:05.086 06:24:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1257089 00:29:05.086 06:24:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:05.086 06:24:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:05.086 06:24:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1257089' 00:29:05.086 killing process with pid 1257089 00:29:05.086 06:24:11 -- common/autotest_common.sh@945 -- # kill 1257089 00:29:05.086 06:24:11 -- common/autotest_common.sh@950 -- # wait 
1257089 00:29:05.086 00:29:05.086 real 0m14.071s 00:29:05.086 user 0m55.615s 00:29:05.086 sys 0m2.517s 00:29:05.086 06:24:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:05.086 06:24:11 -- common/autotest_common.sh@10 -- # set +x 00:29:05.086 ************************************ 00:29:05.086 END TEST spdk_target_abort 00:29:05.086 ************************************ 00:29:05.086 06:24:11 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:29:05.086 06:24:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:05.086 06:24:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:05.086 06:24:11 -- common/autotest_common.sh@10 -- # set +x 00:29:05.086 ************************************ 00:29:05.086 START TEST kernel_target_abort 00:29:05.086 ************************************ 00:29:05.086 06:24:11 -- common/autotest_common.sh@1104 -- # kernel_target 00:29:05.086 06:24:11 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:29:05.086 06:24:11 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:29:05.086 06:24:11 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:29:05.086 06:24:11 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:29:05.086 06:24:11 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:29:05.086 06:24:11 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:29:05.086 06:24:11 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:05.086 06:24:11 -- nvmf/common.sh@627 -- # local block nvme 00:29:05.086 06:24:11 -- nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:29:05.086 06:24:11 -- nvmf/common.sh@630 -- # modprobe nvmet 00:29:05.086 06:24:11 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:05.086 06:24:11 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:06.462 Waiting for block devices as requested 00:29:06.462 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:29:06.462 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:06.462 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:06.720 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:06.720 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:06.720 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:06.720 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:06.977 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:06.977 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:06.977 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:07.235 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:07.235 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:07.235 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:07.235 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:07.493 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:07.493 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:07.493 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:07.754 06:24:14 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:29:07.754 06:24:14 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:07.754 06:24:14 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:29:07.754 06:24:14 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:29:07.754 06:24:14 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:07.754 No valid GPT data, bailing 00:29:07.754 06:24:14 
-- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:07.754 06:24:14 -- scripts/common.sh@393 -- # pt= 00:29:07.754 06:24:14 -- scripts/common.sh@394 -- # return 1 00:29:07.754 06:24:14 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:29:07.754 06:24:14 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:29:07.754 06:24:14 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:29:07.754 06:24:14 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:29:07.754 06:24:14 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:07.754 06:24:14 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:29:07.754 06:24:14 -- nvmf/common.sh@654 -- # echo 1 00:29:07.754 06:24:14 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:29:07.754 06:24:14 -- nvmf/common.sh@656 -- # echo 1 00:29:07.754 06:24:14 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:29:07.754 06:24:14 -- nvmf/common.sh@663 -- # echo tcp 00:29:07.754 06:24:14 -- nvmf/common.sh@664 -- # echo 4420 00:29:07.754 06:24:14 -- nvmf/common.sh@665 -- # echo ipv4 00:29:07.754 06:24:14 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:07.754 06:24:14 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:29:07.754 00:29:07.754 Discovery Log Number of Records 2, Generation counter 2 00:29:07.754 =====Discovery Log Entry 0====== 00:29:07.754 trtype: tcp 00:29:07.754 adrfam: ipv4 00:29:07.754 subtype: current discovery subsystem 00:29:07.754 treq: not specified, sq flow control disable supported 00:29:07.754 portid: 1 00:29:07.754 trsvcid: 4420 00:29:07.754 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:07.754 traddr: 10.0.0.1 00:29:07.754 eflags: none 00:29:07.754 sectype: none 00:29:07.754 =====Discovery Log Entry 1====== 00:29:07.754 trtype: tcp 00:29:07.754 adrfam: ipv4 00:29:07.754 subtype: nvme subsystem 00:29:07.754 treq: not specified, sq flow control disable supported 00:29:07.754 portid: 1 00:29:07.754 trsvcid: 4420 00:29:07.754 subnqn: kernel_target 00:29:07.754 traddr: 10.0.0.1 00:29:07.754 eflags: none 00:29:07.754 sectype: none 00:29:07.754 06:24:14 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:29:07.754 06:24:14 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:07.754 06:24:14 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:07.754 06:24:14 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:29:07.754 06:24:14 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:07.754 06:24:14 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:29:07.754 06:24:14 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:07.754 06:24:14 -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:07.754 06:24:14 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:07.754 06:24:14 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:07.754 06:24:14 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:07.754 06:24:14 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:07.754 06:24:14 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:07.754 06:24:14 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:07.754 
06:24:14 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:29:07.754 06:24:14 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:07.754 06:24:14 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:29:07.754 06:24:14 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:07.754 06:24:14 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:29:07.754 06:24:14 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:07.754 06:24:14 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:29:07.754 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.036 Initializing NVMe Controllers 00:29:11.036 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:29:11.036 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:29:11.036 Initialization complete. Launching workers. 00:29:11.036 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 33782, failed: 0 00:29:11.036 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 33782, failed to submit 0 00:29:11.036 success 0, unsuccess 33782, failed 0 00:29:11.036 06:24:17 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:11.036 06:24:17 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:29:11.036 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.317 Initializing NVMe Controllers 00:29:14.317 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:29:14.317 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:29:14.317 Initialization complete. Launching workers. 00:29:14.317 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 65955, failed: 0 00:29:14.317 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 16638, failed to submit 49317 00:29:14.317 success 0, unsuccess 16638, failed 0 00:29:14.317 06:24:20 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:14.317 06:24:20 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:29:14.317 EAL: No free 2048 kB hugepages reported on node 1 00:29:17.597 Initializing NVMe Controllers 00:29:17.597 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:29:17.597 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:29:17.597 Initialization complete. Launching workers. 
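All six abort passes in this file (three against the SPDK target earlier, three against the kernel target here) drive the same example binary and differ only in queue depth; the qd-64 pass launched just above uses, as traced:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -q 64 -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target'

As a reading aid (the flag meanings are not stated in the log itself): -q is the queue depth (4, 24 and 64 across the three passes), -w rw with -M 50 selects a mixed read/write workload, -o is the I/O size in bytes, and -r is the transport ID of the target under test. The NS/CTRLR summary that follows reports, per namespace and per controller, how many I/Os completed, how many abort commands were submitted versus failed to submit, and a success/unsuccess/failed tally for the submitted aborts.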
00:29:17.597 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 64218, failed: 0 00:29:17.597 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 16042, failed to submit 48176 00:29:17.597 success 0, unsuccess 16042, failed 0 00:29:17.597 06:24:23 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:29:17.597 06:24:23 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:29:17.597 06:24:23 -- nvmf/common.sh@677 -- # echo 0 00:29:17.597 06:24:23 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:29:17.597 06:24:23 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:29:17.597 06:24:23 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:17.597 06:24:23 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:29:17.597 06:24:23 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:29:17.597 06:24:23 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:29:17.597 00:29:17.597 real 0m12.093s 00:29:17.597 user 0m4.753s 00:29:17.597 sys 0m2.609s 00:29:17.597 06:24:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:17.597 06:24:23 -- common/autotest_common.sh@10 -- # set +x 00:29:17.597 ************************************ 00:29:17.597 END TEST kernel_target_abort 00:29:17.597 ************************************ 00:29:17.597 06:24:23 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:29:17.597 06:24:23 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:29:17.597 06:24:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:17.597 06:24:23 -- nvmf/common.sh@116 -- # sync 00:29:17.597 06:24:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:17.597 06:24:23 -- nvmf/common.sh@119 -- # set +e 00:29:17.597 06:24:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:17.597 06:24:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:17.597 rmmod nvme_tcp 00:29:17.597 rmmod nvme_fabrics 00:29:17.597 rmmod nvme_keyring 00:29:17.597 06:24:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:17.597 06:24:23 -- nvmf/common.sh@123 -- # set -e 00:29:17.597 06:24:23 -- nvmf/common.sh@124 -- # return 0 00:29:17.597 06:24:23 -- nvmf/common.sh@477 -- # '[' -n 1257089 ']' 00:29:17.597 06:24:23 -- nvmf/common.sh@478 -- # killprocess 1257089 00:29:17.597 06:24:23 -- common/autotest_common.sh@926 -- # '[' -z 1257089 ']' 00:29:17.597 06:24:23 -- common/autotest_common.sh@930 -- # kill -0 1257089 00:29:17.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1257089) - No such process 00:29:17.597 06:24:23 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1257089 is not found' 00:29:17.597 Process with pid 1257089 is not found 00:29:17.597 06:24:23 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:29:17.597 06:24:23 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:18.528 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:29:18.528 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:29:18.528 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:29:18.528 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:29:18.528 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:29:18.528 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:29:18.528 0000:00:04.2 (8086 0e22): Already using the ioatdma 
driver 00:29:18.528 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:29:18.528 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:29:18.528 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:29:18.528 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:29:18.528 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:29:18.528 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:29:18.528 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:29:18.528 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:29:18.528 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:29:18.528 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:29:18.787 06:24:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:18.787 06:24:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:18.787 06:24:25 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:18.787 06:24:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:18.787 06:24:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.787 06:24:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:18.787 06:24:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.688 06:24:27 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:20.688 00:29:20.688 real 0m35.206s 00:29:20.688 user 1m2.691s 00:29:20.688 sys 0m8.468s 00:29:20.688 06:24:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:20.688 06:24:27 -- common/autotest_common.sh@10 -- # set +x 00:29:20.688 ************************************ 00:29:20.688 END TEST nvmf_abort_qd_sizes 00:29:20.688 ************************************ 00:29:20.688 06:24:27 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:29:20.688 06:24:27 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:29:20.688 06:24:27 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:29:20.688 06:24:27 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:29:20.688 06:24:27 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:20.688 06:24:27 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:29:20.688 06:24:27 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:29:20.688 06:24:27 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:20.688 06:24:27 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:29:20.688 06:24:27 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:29:20.688 06:24:27 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:29:20.688 06:24:27 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:29:20.688 06:24:27 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:29:20.688 06:24:27 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:29:20.688 06:24:27 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:29:20.688 06:24:27 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:29:20.688 06:24:27 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:29:20.688 06:24:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:20.688 06:24:27 -- common/autotest_common.sh@10 -- # set +x 00:29:20.688 06:24:27 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:29:20.688 06:24:27 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:29:20.688 06:24:27 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:29:20.688 06:24:27 -- common/autotest_common.sh@10 -- # set +x 00:29:22.587 INFO: APP EXITING 00:29:22.587 INFO: killing all VMs 00:29:22.587 INFO: killing vhost app 00:29:22.587 INFO: EXIT DONE 00:29:23.537 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:29:23.795 0000:00:04.7 (8086 0e27): 
Already using the ioatdma driver 00:29:23.795 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:29:23.795 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:29:23.795 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:29:23.795 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:29:23.795 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:29:23.795 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:29:23.795 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:29:23.795 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:29:23.795 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:29:23.795 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:29:23.795 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:29:23.795 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:29:23.795 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:29:23.795 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:29:23.795 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:29:25.169 Cleaning 00:29:25.169 Removing: /var/run/dpdk/spdk0/config 00:29:25.169 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:25.169 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:25.169 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:25.169 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:25.169 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:29:25.169 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:29:25.169 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:29:25.169 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:29:25.169 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:25.169 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:25.169 Removing: /var/run/dpdk/spdk1/config 00:29:25.169 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:25.169 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:25.169 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:25.169 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:25.169 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:29:25.169 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:29:25.169 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:29:25.169 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:29:25.169 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:25.169 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:25.169 Removing: /var/run/dpdk/spdk1/mp_socket 00:29:25.169 Removing: /var/run/dpdk/spdk2/config 00:29:25.169 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:25.169 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:25.169 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:25.169 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:25.169 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:29:25.169 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:29:25.169 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:29:25.169 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:29:25.169 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:25.169 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:25.169 Removing: /var/run/dpdk/spdk3/config 00:29:25.169 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:25.169 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:25.169 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:25.169 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:25.169 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:29:25.169 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:29:25.169 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:29:25.169 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:29:25.169 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:25.169 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:25.169 Removing: /var/run/dpdk/spdk4/config 00:29:25.169 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:25.169 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:25.169 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:25.169 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:25.169 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:29:25.169 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:29:25.169 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:29:25.169 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:29:25.169 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:25.169 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:25.170 Removing: /dev/shm/bdev_svc_trace.1 00:29:25.170 Removing: /dev/shm/nvmf_trace.0 00:29:25.170 Removing: /dev/shm/spdk_tgt_trace.pid992069 00:29:25.170 Removing: /var/run/dpdk/spdk0 00:29:25.170 Removing: /var/run/dpdk/spdk1 00:29:25.170 Removing: /var/run/dpdk/spdk2 00:29:25.170 Removing: /var/run/dpdk/spdk3 00:29:25.170 Removing: /var/run/dpdk/spdk4 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1000308 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1000445 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1000884 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1001026 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1001196 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1001340 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1001504 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1001646 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1002013 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1002175 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1002544 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1002787 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1002826 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1002994 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1003132 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1003412 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1003805 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1004218 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1004437 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1004646 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1004795 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1004956 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1005217 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1005375 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1005522 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1005796 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1005945 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1006105 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1006369 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1006529 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1006678 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1006942 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1007094 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1007260 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1007418 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1007679 00:29:25.170 Removing: /var/run/dpdk/spdk_pid1007822 00:29:25.170 
00:29:25.170 Removing: /var/run/dpdk/spdk_pid1008246
00:29:25.170 Removing: /var/run/dpdk/spdk_pid1008411
00:29:25.170 Removing: /var/run/dpdk/spdk_pid1008555
00:29:25.170 Removing: /var/run/dpdk/spdk_pid1008835
00:29:25.170 Removing: /var/run/dpdk/spdk_pid1008973
00:29:25.170 Removing: /var/run/dpdk/spdk_pid1009143
00:29:25.170 Removing: /var/run/dpdk/spdk_pid1009401
00:29:25.170 Removing: /var/run/dpdk/spdk_pid1009563
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1009710
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1009997
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1010138
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1010304
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1010569
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1010730
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1010868
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1011158
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1011217
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1011426
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1013622
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1069509
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1072170
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1079270
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1082619
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1085005
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1085544
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1090504
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1090802
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1093477
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1097344
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1100094
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1106848
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1112250
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1113474
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1114283
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1124780
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1127025
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1129846
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1131063
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1132555
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1132706
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1132858
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1133194
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1133827
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1135706
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1136599
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1137053
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1140673
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1144122
00:29:25.427 Removing: /var/run/dpdk/spdk_pid1147781
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1171882
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1174616
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1178573
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1179555
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1180803
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1183385
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1185907
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1190282
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1190284
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1193219
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1193366
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1193505
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1193776
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1193906
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1195012
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1196342
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1198064
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1199281
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1200517
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1201736
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1205736
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1206079
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1207397
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1208152
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1211943
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1214084
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1217595
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1221342
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1225027
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1225573
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1226111
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1226526
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1227656
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1228189
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1228740
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1229291
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1231961
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1232101
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1235963
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1236141
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1237787
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1242940
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1242969
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1246009
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1247449
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1248888
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1249652
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1251223
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1252015
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1257558
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1258045
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1258446
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1260433
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1260847
00:29:25.428 Removing: /var/run/dpdk/spdk_pid1261251
00:29:25.428 Removing: /var/run/dpdk/spdk_pid990369
00:29:25.428 Removing: /var/run/dpdk/spdk_pid991127
00:29:25.428 Removing: /var/run/dpdk/spdk_pid992069
00:29:25.428 Removing: /var/run/dpdk/spdk_pid992549
00:29:25.428 Removing: /var/run/dpdk/spdk_pid993769
00:29:25.428 Removing: /var/run/dpdk/spdk_pid994712
00:29:25.687 Removing: /var/run/dpdk/spdk_pid995021
00:29:25.687 Removing: /var/run/dpdk/spdk_pid995219
00:29:25.687 Removing: /var/run/dpdk/spdk_pid995556
00:29:25.687 Removing: /var/run/dpdk/spdk_pid995754
00:29:25.687 Removing: /var/run/dpdk/spdk_pid995924
00:29:25.687 Removing: /var/run/dpdk/spdk_pid996190
00:29:25.687 Removing: /var/run/dpdk/spdk_pid996381
00:29:25.687 Removing: /var/run/dpdk/spdk_pid996844
00:29:25.687 Removing: /var/run/dpdk/spdk_pid999384
00:29:25.687 Removing: /var/run/dpdk/spdk_pid999560
00:29:25.687 Removing: /var/run/dpdk/spdk_pid999851
00:29:25.687 Removing: /var/run/dpdk/spdk_pid999992
00:29:25.687 Clean
00:29:33.795 killing process with pid 962292
00:29:33.795 killing process with pid 962289
00:29:33.795 killing process with pid 962291
00:29:33.795 killing process with pid 962290
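The "Clean" step above removes the per-instance DPDK runtime directories under /var/run/dpdk, the SPDK trace files under /dev/shm, and then kills the helper processes started for the nightly run. The following is only a minimal stand-alone sketch of that idea for reference, not the autotest cleanup code itself; the workspace path and process pattern are copied from this job, everything else is an illustrative assumption.

  #!/usr/bin/env bash
  # Hedged sketch: tidy up leftover SPDK/DPDK runtime state after a test run.
  set -euo pipefail

  # Per-instance DPDK runtime state (config, fbarray_* segments, hugepage_info, mp_socket).
  sudo rm -rf /var/run/dpdk/spdk*

  # SPDK shared-memory trace files (bdev_svc_trace.*, nvmf_trace.*, spdk_tgt_trace.pid*).
  sudo rm -f /dev/shm/*_trace.*

  # Kill anything still running out of the test workspace (pattern as used by this job).
  pids=$(pgrep -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk || true)
  [ -n "$pids" ] && sudo kill -9 $pids || true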
00:29:33.795 06:24:39 -- common/autotest_common.sh@1436 -- # return 0
00:29:33.795 06:24:39 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup
00:29:33.795 06:24:39 -- common/autotest_common.sh@718 -- # xtrace_disable
00:29:33.795 06:24:39 -- common/autotest_common.sh@10 -- # set +x
00:29:33.795 06:24:39 -- spdk/autotest.sh@389 -- # timing_exit autotest
00:29:33.795 06:24:39 -- common/autotest_common.sh@718 -- # xtrace_disable
00:29:33.795 06:24:39 -- common/autotest_common.sh@10 -- # set +x
00:29:33.795 06:24:39 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:29:33.795 06:24:39 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:29:33.795 06:24:39 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:29:33.796 06:24:39 -- spdk/autotest.sh@394 -- # hash lcov
00:29:33.796 06:24:39 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:29:33.796 06:24:39 -- spdk/autotest.sh@396 -- # hostname
00:29:33.796 06:24:39 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:29:33.796 geninfo: WARNING: invalid characters removed from testname!
00:30:00.338 06:25:04 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:01.717 06:25:07 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:04.252 06:25:10 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:06.786 06:25:13 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:09.357 06:25:15 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:12.640 06:25:18 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:30:15.172 06:25:21 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
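The lcov invocations above follow a capture-merge-filter pattern: capture the test-time counters into cov_test.info, merge them with the post-build baseline cov_base.info, then strip third-party and uninteresting paths from the combined report. A hedged, simplified sketch of that flow is shown below; the output directory and exclude patterns are copied from the log, the --rc options are dropped for brevity, and the genhtml step is only a common follow-up, not something this job runs.

  #!/usr/bin/env bash
  # Hedged sketch of the lcov capture/merge/filter flow seen in the log above.
  set -euo pipefail
  src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  out=$src/../output

  # 1. Capture counters produced by the test run.
  lcov --no-external -q -c -d "$src" -t "$(hostname)" -o "$out/cov_test.info"

  # 2. Merge with the baseline captured after the build.
  lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

  # 3. Drop paths that should not count toward coverage.
  for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov -q -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
  done

  # 4. Optional follow-up (not part of this job): render an HTML report.
  # genhtml "$out/cov_total.info" -o "$out/coverage"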
00:30:15.172 06:25:21 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:15.172 06:25:21 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:30:15.172 06:25:21 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:15.172 06:25:21 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:15.172 06:25:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:15.172 06:25:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:15.172 06:25:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:15.172 06:25:21 -- paths/export.sh@5 -- $ export PATH
00:30:15.172 06:25:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
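paths/export.sh, as traced above, simply prepends the pinned toolchain directories (protoc, Go, golangci-lint) to PATH and re-exports it; because it runs unconditionally, directories already on PATH get prepended again, which is why the echoed PATH shows duplicated entries. A small hedged variant that keeps the same ordering but skips directories already present might look like the sketch below; the helper name is illustrative and not part of the SPDK scripts.

  #!/usr/bin/env bash
  # Hedged sketch: prepend a directory to PATH only if it is not already there.
  path_prepend() {
      case ":$PATH:" in
          *":$1:"*) ;;          # already present, leave PATH unchanged
          *) PATH="$1:$PATH" ;; # otherwise put it first
      esac
  }

  path_prepend /opt/golangci/1.54.2/bin
  path_prepend /opt/go/1.21.1/bin
  path_prepend /opt/protoc/21.7/bin
  export PATH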
00:30:15.172 06:25:21 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:30:15.172 06:25:21 -- common/autobuild_common.sh@435 -- $ date +%s
00:30:15.173 06:25:21 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720844721.XXXXXX
00:30:15.173 06:25:21 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720844721.4wCh4m
00:30:15.173 06:25:21 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:30:15.173 06:25:21 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:30:15.173 06:25:21 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:30:15.173 06:25:21 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:30:15.173 06:25:21 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:30:15.173 06:25:21 -- common/autobuild_common.sh@451 -- $ get_config_params
00:30:15.173 06:25:21 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:30:15.173 06:25:21 -- common/autotest_common.sh@10 -- $ set +x
00:30:15.173 06:25:21 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:30:15.173 06:25:21 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:30:15.173 06:25:21 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:15.173 06:25:21 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:30:15.173 06:25:21 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:30:15.173 06:25:21 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:30:15.173 06:25:21 -- spdk/autopackage.sh@19 -- $ timing_finish
00:30:15.173 06:25:21 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:30:15.173 06:25:21 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:30:15.173 06:25:21 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:30:15.173 06:25:21 -- spdk/autopackage.sh@20 -- $ exit 0
00:30:15.185 + [[ -n 919893 ]]
00:30:15.185 + sudo kill 919893
00:30:15.185 [Pipeline] }
00:30:15.203 [Pipeline] // stage
00:30:15.209 [Pipeline] }
00:30:15.228 [Pipeline] // timeout
00:30:15.234 [Pipeline] }
00:30:15.252 [Pipeline] // catchError
00:30:15.258 [Pipeline] }
00:30:15.276 [Pipeline] // wrap
00:30:15.282 [Pipeline] }
00:30:15.299 [Pipeline] // catchError
00:30:15.309 [Pipeline] stage
00:30:15.311 [Pipeline] { (Epilogue)
00:30:15.326 [Pipeline] catchError
00:30:15.328 [Pipeline] {
00:30:15.343 [Pipeline] echo
00:30:15.345 Cleanup processes
00:30:15.352 [Pipeline] sh
00:30:15.638 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:15.638 1273071 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:15.653 [Pipeline] sh
00:30:15.935 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:30:15.935 ++ awk '{print $1}'
00:30:15.935 ++ grep -v 'sudo pgrep'
00:30:15.935 + sudo kill -9
00:30:15.935 + true
00:30:15.946 [Pipeline] sh
00:30:16.226 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:30:28.465 [Pipeline] sh
00:30:28.746 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:30:28.746 Artifacts sizes are good
00:30:28.761 [Pipeline] archiveArtifacts
00:30:28.767 Archiving artifacts
00:30:28.960 [Pipeline] sh
00:30:29.241 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:30:29.258 [Pipeline] cleanWs
00:30:29.269 [WS-CLEANUP] Deleting project workspace...
00:30:29.269 [WS-CLEANUP] Deferred wipeout is used...
00:30:29.275 [WS-CLEANUP] done
00:30:29.277 [Pipeline] }
00:30:29.298 [Pipeline] // catchError
00:30:29.309 [Pipeline] sh
00:30:29.582 + logger -p user.info -t JENKINS-CI
00:30:29.591 [Pipeline] }
00:30:29.604 [Pipeline] // stage
00:30:29.608 [Pipeline] }
00:30:29.620 [Pipeline] // node
00:30:29.624 [Pipeline] End of Pipeline
00:30:29.646 Finished: SUCCESS